Why you shouldn’t rely on AI for your alt text
AI visual recognition is everywhere these days. If you’ve ever deposited a check using a banking app, identified a plant or bird using a reverse image search, or unlocked your phone by holding it in front of your face, then you’ve experienced the power of this rapidly developing (and sometimes controversial) technology.
But there are some applications of visual recognition AI that just aren’t there yet — and maybe never will be. One of those is using AI to generate alt text. Alt text is the written text that describes the content and meaning of an online image, and it is embedded into the code of the webpage where the image is displayed. This text is read aloud by screen readers, which many people with visual impairments use to navigate the web and other digital experiences. Given that images are increasingly used to convey meaning and information online, alt text is a critical aspect of accessible design.
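In practice, alt text lives in the `alt` attribute of an HTML `<img>` element — a minimal sketch, with an illustrative filename and description:

```html
<!-- A screen reader announces the alt attribute in place of the image.
     The filename and description here are illustrative examples. -->
<img src="team-photo.jpg" alt="Three colleagues laugh together around a standing desk">
```

If the `alt` attribute is missing, many screen readers fall back to reading the filename aloud, which rarely tells the listener anything useful.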
The good news is that awareness of the importance of alt text for people with visual impairments is spreading. The not-so-good news is that many companies are relying on AI to generate that text instead of having humans write it — and the results are, well… you can see for yourself.
We shared a few examples of AI-generated alt text during our game show “Who Wants to Be an Accessibility Champion?,” which brought together three accessibility advocates from across the digital world to help us showcase key accessibility concepts, including AI alt text as well as auto-generated captions.
Alt text examples
Here’s one of those examples, with a photo you may be familiar with…
Here’s what Facebook’s AI alt text generator came up with: “May be an image of 4 people, people standing and outdoors.”
Our accessibility experts, on the other hand, came up with the following suggested alt text for this image: “A man walks with a woman. He turns to look at another woman, his eyes narrowed and lips pursed as if whistling. The woman he is walking with looks at him with wide eyes and an open frown.”
There’s no contest. Our human-written alt text describes the image with detail and precision, conveying not only the fact that there are people in the image, but what they’re doing. From the human-authored description, someone encountering the image with a screen reader would be able to understand the story the image is telling.
As for the Facebook AI alt text? It’s so vague that it could describe an infinite number of scenarios. Does the image depict four bridesmaids at a beachside wedding? Four soldiers on the deck of an aircraft carrier? There’s no way to know. (And where did Facebook get “4 people”? There are at least nine if you count the blurred figures in the background, though only the three in the foreground matter to the image’s meaning.)
Perhaps in the future, AI visual recognition will advance to the point where it can generate detailed, nuanced descriptions of images and their meaning. But until then, if you want to ensure that the images on your website or app can be fully understood by all users, it’s better to take the time to craft your own alt text.
Good alt text should:
- Convey meaning
- Describe the most important details first
- Be concise
- Jump right into the description — don’t start with phrases like “a photo of…” or “an icon of…”
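Applied to the photo above, these guidelines might look like the following sketch (the filename and surrounding markup are illustrative, and the description is the one our experts suggested):

```html
<!-- Leads with the most important detail, conveys the story, and skips "A photo of…" -->
<img
  src="crosswalk-photo.jpg"
  alt="A man walks with a woman. He turns to look at another woman, his eyes
       narrowed and lips pursed as if whistling. The woman he is walking with
       looks at him with wide eyes and an open frown."
>
```

One related convention worth knowing: for purely decorative images that add no information, an empty `alt=""` tells screen readers to skip the image entirely rather than announce its filename.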
Alt text resources
For more on alt text best practices, check out Accessibility Tips for Social Media. Or, if you’re ready to help your team build comprehensive digital accessibility expertise, check out our role-based, tailored training programs.