AI is changing the landscape for low-vision tech.
The next time you sit down at your dentist’s office, you might notice that the person next to you, rather than taking out their reading glasses to examine the office policies, takes out their phone instead. They snap a picture of the document and then have AI interpret it for them through their headphones. No glasses needed.
For people who are blind and for those dealing with age-related vision loss, AI provides surprising new ways to read the surrounding environment. App developers are taking note, and several new phone applications offer impressive image description and navigation services designed specifically for people with blindness or low vision.
As with all new AI technology, developers and users alike are concerned with accuracy, safety, privacy, the environmental toll of AI processing, and the potential for bias in AI-generated output. At the start of what seems to be an AI technological revolution, the disabled community is very much a part of the conversation. According to the developers’ websites, all the apps below were developed in close collaboration with blind people.
Translating a Picture into a Thousand Words
Be My AI is a new collaboration between Be My Eyes and OpenAI, the creators of ChatGPT. Since 2015, Be My Eyes has used video calls to connect people who are blind with volunteers. The volunteer provides the blind person with visual assistance by describing what they see in the video call and answering the person’s questions. While Be My Eyes still engages volunteers, Be My AI now also enables blind people to upload photos of their surroundings and get back detailed, AI-generated descriptions immediately.
Writer Milagro Costabel says she tempered her enthusiasm when she heard about the update. Previous AI-powered apps, like Seeing AI from Microsoft, had made similar promises about image description, but Costabel found that they delivered only bare-bones commentary. With GPT-4 powering Be My AI, the story changed.
“Suddenly, I was in a world where nothing was off limits,” she writes in Slate. “By simply waving my cellphone, I could hear, with great detail, what my friends were wearing, read street signs and shop prices, analyze a room without having entered it, and indulge in detailed descriptions of the food—one of my great passions—that I was about to eat.” Costabel also appreciates that with Be My AI, unlike with previous apps, she can get her questions about each image answered because she can chat with the AI in real time. She’ll take a picture of a menu at a restaurant, for example, and ask for a recommendation of a dish that meets her criteria, like something vegetarian under $20. The app will tell her the choices.
Be My AI has changed how Costabel interacts with the world, but she still has hesitations. “Artificial intelligence can be wrong,” she says, “and a blind person like me would have little way of noticing unless I knew in advance what was in an image.” She and other users also express concern about the privacy of the images they upload.
Be My Eyes includes the Be My AI feature, and it’s free for iPhone and Android. Seeing AI, “A Talking Camera for the Blind,” didn’t impress Costabel when she first tried it, but Microsoft has since updated its AI technology. That app is also free and has positive reviews.
Note, too, that the free version of ChatGPT itself has features that people with low vision may find useful. ChatGPT can describe images and can chat with you in voice as well as text.
Navigating the City with AI
For pedestrians with low vision, crossing the street can be a logistical challenge. In New York City, a recent class action lawsuit brought by Disability Rights Advocates revealed that only 4% of the city’s intersections were accessible to pedestrians with low vision. While there’s no substitute for building accessibility into infrastructure, a few new apps are leveraging AI to make navigating the city much easier.
Lazarillo guides users through urban environments with detailed audio directions. With its guide dog logo, the app aims to be a digital companion. It enables users to search for businesses and locations around them and then offers spoken walking directions.
Google Maps also navigates pedestrians through the streets with audio, but Lazarillo goes a step further (so to speak). Listeners can take Lazarillo indoors and let it guide them through businesses, schools, museums, banks, and public transit. Its “indoor wayfinding technology” includes prompts like, “Front desk 20 feet in front of you.” However, only some businesses offer this indoor wayfinding. Lazarillo works with organizations to make their indoor spaces navigable through the app.
People who are blind and older adults with sight loss can learn to use Lazarillo through a series of tutorials on the website. It's free for both Android and iPhone and is available in 25 languages.
Another new app, Oko from the developer Ayes, focuses the power of AI on the problem of when to cross the road. Oko uses the phone’s camera to identify the walk signal on the opposite side of the street. When the app identifies the signal, it notifies the user through haptics, like a slow buzz for “Don’t walk,” followed by a faster buzz when the signal changes to “Walk.” For people who have some vision, there’s also the option to display the signal on the phone instead. Like Lazarillo, Oko provides voice navigation for your route, and it helps you find restaurants and businesses that meet your specified accessibility needs.
The company behind Oko makes it clear that the app isn’t meant to be a replacement for any of the tools that people with low vision already use to navigate: a guide dog, white cane, or mobility training. Rather, like other AI-powered apps, it’s another tool that blind people and those with low vision can keep in their toolbox.
Additional Sources:
Blog posting provided by Society of Certified Senior Advisors