Social media sharing previews with Open Graph


One of the features I had in mind for this project was the ability to display definition previews when shared on social media. That, and SEO, is the reason I opted for server-side rendering. NextJS was making headlines at the time, and I decided to give it a shot. This was purely hype-driven; if I had been pressed for time, I would have gone with Angular, since I had more experience with SSR through Angular Universal. But it was an interesting learning experience nonetheless. For the deployment, I went with Vercel, since it goes hand in hand with NextJS. The backend, written in Django, was also deployed on Vercel. Django on Vercel was unorthodox, to say the least. It came with its fair share of headaches, but I'll have to address those in another article.


Social sharing previews rely on a protocol called Open Graph. It lets you define meta tags that social network robots read in order to display "rich objects". For the purposes of this article, I only had to define the basic tags like og:title, og:description and og:image. This was straightforward with NextJS 12:

  const title = `Newest Moroccan slang entries | Page ${page}`;
  const description = "Find the latest developments of Moroccan Slang here.";
  return (
    <Head>
      <title>{title}</title>
      <meta name="robots" content="index, follow" />
      <meta charSet="UTF-8" />
      <meta name="description" content={description} />
      <meta property="og:title" content={title} />
      <meta property="og:description" content={description} />
      <meta property="og:image" content={OgImage.src} />
      <meta name="twitter:image" content={OgImage.src} />
      <meta name="twitter:card" content="summary_large_image" />
    </Head>
  );

The downside is that I couldn't refactor this into a separate component due to a NextJS limitation:

"title, meta or any other elements (e.g. script) need to be contained as direct children of the Head element, or wrapped into maximum one level of <React.Fragment> or arrays—otherwise the tags won't be correctly picked up on client-side navigations."

I think v13 improved upon this.

This worked as expected in some online debugging tools and on Facebook, Telegram and Instagram, but only partially on Twitter. Through trial and error, I discovered that Twitter requires the image to be 1200x630. To that, I added a summary_large_image twitter:card meta tag for good measure, and Twitter finally agreed to display large cards with working images. No matter what I tried, I couldn't get regular-sized cards to display images. This was difficult to iterate on because Twitter tends to cache the previews, so I kept adding bogus GET parameters to bypass it. That approach was flaky, so the fact that it didn't work for me could be attributed to a lot of factors.
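The cache-busting step boils down to tacking a throwaway query parameter onto the page URL before pasting it into Twitter. A minimal sketch, assuming a helper of my own naming (this isn't from the site's code):

```typescript
// Hypothetical helper: append a throwaway query parameter so Twitter's
// card crawler sees a "new" URL and re-fetches the preview instead of
// serving its cached copy.
function bustCache(pageUrl: string): string {
  const u = new URL(pageUrl);
  u.searchParams.set("v", Date.now().toString());
  return u.toString();
}
```

As noted above, this is flaky: nothing guarantees the crawler treats the parameter as significant.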

Dynamic image previews, RTL and centering woes

Now for the juicy part: dynamic image previews. I wanted the definition page to display an Arabic rendition of the word (or expression) in the preview image. Here's the desired effect:

This meant that the og:image meta tag had to point at a URL that somehow generates an image containing the word in question. I looked into it and, what do you know, Vercel already has support for OG image generation. It even supports custom fonts. All I had to do was define an API route that returns a well-crafted ImageResponse:

import { ImageResponse } from "@vercel/og";
import type { NextRequest } from "next/server";

export default async function handler(request: NextRequest) {
  const url = new URL(request.url);
  const arabicTitle = url.searchParams.get("arabicTitle");

  const maghrebi = await fetch(
    new URL("../../public/fonts/maghrebi.ttf", import.meta.url)
  ).then((res) => res.arrayBuffer());

  return new ImageResponse(
    (
      <div
        style={{
          backgroundColor: "#fffef0",
          color: "#371400",
          width: "100%",
          height: "100%",
          display: "flex",
          flexDirection: "column",
          alignItems: "center",
          justifyContent: "center",
          border: "1px solid black",
          textAlign: "center",
        }}
      >
        <p style={{ fontFamily: '"maghrebi"', fontSize: "4rem" }}>
          {arabicTitle}
        </p>
      </div>
    ),
    {
      width: 711,
      height: 374,
      fonts: [
        {
          name: "maghrebi",
          data: maghrebi,
          style: "normal",
        },
      ],
    }
  );
}
And reference it in the relevant meta tag:
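The original snippet didn't survive here, but the idea is that the og:image content points at the API route with the word in the query string. A sketch, assuming the route lives at /api/og (the arabicTitle parameter matches what the handler reads; the helper is mine):

```typescript
// Sketch: build the og:image URL for a definition page. "/api/og" is an
// assumed route path; encodeURIComponent keeps the Arabic characters
// intact when they land in the query string.
function ogImageUrl(baseUrl: string, arabicTitle: string): string {
  return `${baseUrl}/api/og?arabicTitle=${encodeURIComponent(arabicTitle)}`;
}
// In the page head:
//   <meta property="og:image" content={ogImageUrl(baseUrl, word)} />
```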


This worked OK even though RTL isn't technically supported by Satori, the underlying engine. The downsides included off-centering in certain situations:

And a reversed word order when the expression consists of multiple words:

قبر الحياة

With that in mind, I decided to roll my own version of vercel/og. It didn't have to include all the bells and whistles; it only had to display a correctly centered expression with the required font and colors. I started playing around with ImageMagick and, after a lot of experimentation, ended up with this command:

convert -size 1200x630 xc:'#fffef0' \( \
    -gravity center \
    -background none \
    -pointsize 72 \
    -fill '#371400' \
    -font Samir_Khouaja_Maghribi \
    -size 1200x180 \
    "pango: قبر الحياة " \
    -trim \
  \) \
  -gravity center \
  -composite result.png

feh result.png

To get there, I had to:

  • Use pango: instead of label:; otherwise the letters come out reversed and without ligatures
  • Install the font system-wide by moving it to /usr/local/share/fonts
  • Fix vertical centering with pango: as a workaround, I first rendered the text onto a small-height canvas, then vertically centered it in the outer canvas using -gravity center
  • Fix horizontal centering: it was still off in ImageMagick, but thanks to this trick, I surrounded the input with spaces and trimmed the result afterwards to work around the font weirdness

Once I got that part sorted out, I wrapped it in a simple PHP script to make it accessible via HTTP. I also added a rudimentary caching mechanism to avoid regenerating images for the same input:

<?php
setlocale(LC_CTYPE, "en_US.UTF-8"); // otherwise escapeshellarg removes arabic letters

if (!isset($_GET['text']) || $_GET['text'] === '') {
    header('Content-Type: application/json');
    exit(json_encode(['error' => 'Missing parameter: text']));
}

$text = escapeshellarg($_GET['text']);
// note: base64 output can contain '/', so md5($text) would be a safer cache key
$cacheKey = base64_encode($text);
$filename = '/tmp/' . $cacheKey . '.png';

// serve from cache if we already generated this image
if (file_exists($filename)) {
    displayImage($filename);
}

$result = system("sh /home/hassan/ " . $text . " " . escapeshellarg($filename));
if ($result === false || !file_exists($filename)) {
    error_log('Failed to generate image for : ' . $text);
    header('Content-Type: application/json');
    exit(json_encode(['error' => 'Failed to generate image']));
}

displayImage($filename);

function displayImage($path)
{
    header('Content-Type: image/png');
    readfile($path);
    exit;
}
This should probably be rewritten to interface directly with the ImageMagick library instead of invoking it through a shell script. The shell-script approach will probably fail under concurrent requests, but then again, I don't have enough traffic for that to happen. And even if I did, the cheap VPS that hosts... err, I mean the microservice uuh pod that handles OG images would crash and burn either way.

To hide the URL of the API that handles image generation, I added a rule to next.config.js that rewrites /imagify to the real URL:

  async rewrites() {
    return [
      {
        source: "/imagify",
        destination: process.env.IMAGIFY_URL,
      },
    ];
  },
And replaced the meta tag URL with /imagify:
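The updated tag looked roughly like this; the text parameter name and the helper are my assumptions, but the NEXT_PUBLIC_URL prefix is the one discussed below:

```typescript
// Sketch: the rewrite hides the real backend URL, and the public env var
// turns the root-relative /imagify path into the absolute URL that
// Twitter's crawler requires in the content attribute.
function imagifyUrl(publicUrl: string, text: string): string {
  return `${publicUrl}/imagify?text=${encodeURIComponent(text)}`;
}
// In the page head:
//   <meta property="og:image" content={imagifyUrl(process.env.NEXT_PUBLIC_URL!, word)} />
```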


This way, the real URL isn't exposed to the public. Notice yet another Twitter hoop I had to jump through: I prepended /imagify with ${process.env.NEXT_PUBLIC_URL} because, otherwise, the content attribute contains an absolute path while Twitter expects an absolute URL. There was no straightforward way to get the site's URL in NextJS (that I know of), so I manually defined it as a public environment variable.

This concludes how I handled social media sharing previews with special considerations for right-to-left text and custom fonts. Thanks for reading and if you have any insights, feel free to drop a comment.

