Louise Plunkett, who lives in Norwich, says that “AI has revolutionised my daily life.”

Ms. Plunkett has Stargardt disease, a rare genetic eye condition that causes progressive vision loss, which she says “impacts everything I do.” “I can’t recognise people, even my own husband or my children. When my children were younger, I used to have to teach them how to come to me when I met them at the school playground.”

Ms. Plunkett is no stranger to digital tools; her company advises businesses on making their online content accessible to visually impaired people. For years she has used services such as Alexa, Google Home and Siri for tasks like setting alarms and checking the weather forecast.

These days she finds an assistant called Be My AI particularly helpful. The app uses ChatGPT to generate detailed descriptions of images, which it then reads aloud.

“I’m quite a stubborn person,” she says. “I don’t like asking for help or admitting I need help, so using the AI tool is useful for things when other humans aren’t around.” She uses it to find the women’s toilets, read the ingredients on food packaging, or decipher letters.

But she finds AI can sometimes overdo it. “The downside with AI is that sometimes it gives you too many details. You sometimes just want the basic information of what is in front of you, but it will go above and beyond, and offer up mood and emotions. For example, it might say ‘a swirling carpet evoking memories of times gone by’. It feels like it is one step too far.”

Be My AI was developed by the Danish company Be My Eyes, whose original service connects human volunteers with blind and low-vision users, describing their surroundings over a smartphone video call.
Nevertheless, Jesper Hvirring Henriksen, the company’s chief technology officer, says some of its 600,000 users are switching to the AI tool for help. “We have a woman who was one of our first users 10 years ago, and within the first six months [of releasing Be My AI], she did more than 600 image descriptions.”

Mr. Henriksen is also seeing people use the app in new ways. “We’re finding people using it to check pictures that have been sent to them on WhatsApp groups,” he says. “Maybe they’re not going to call another human each time to ask them about a picture sent on a WhatsApp group, but they use AI.”

Looking ahead, he suggests live-streamed video, with the technology describing nearby buildings and movement, could be the next step. “This is going to be a gamechanger. It’s like having a little person in your shirt pocket all day telling you what is going on.”

Be My Eyes is free for users; it makes money by signing companies up to its paid directory service, which lets them offer information and contact details to the blind and low-vision community.

Mr. Henriksen does not believe AI will remove the need for human contact. “At Be My Eyes, people are still choosing to call a volunteer too. The blind population in the Western world are generally not young when they start to experience vision loss… it’s more skewed towards the elderly population and this [AI] might add an extra layer of complexity. Humans are faster and potentially more accurate.”

Other companies also make products to help visually impaired people. WeWalk, for instance, offers an AI-powered cane with a voice assistant that detects obstacles and provides accessible navigation and real-time public transport updates. The cane connects to a smartphone app with built-in mapping, so it can tell users about points of interest, such as the nearest café, in more than 3,000 cities.
Gamze Sofuoğlu, WeWalk’s product manager, says: “The cane is very important for us, it helps navigation and is a very important symbol as it shows our independence and autonomy.”

“Our latest version helps users navigate through voice commands, for example when you say ‘take me home’ or ‘the nearest café’ it starts navigating, and you can get information about public transport. You don’t need to touch your phone. It provides freedom for blind and low vision people.”

Ms. Sofuoğlu, who is blind herself, says she used the cane on recent visits to Lisbon and Rome.

Robin Spinks, head of inclusive design at the RNIB (Royal National Institute of Blind People), who also has low vision, is a strong advocate of AI and uses it almost every day. He uses ChatGPT to support his work, getting summaries of developments in specific areas, or even to plan a paddleboard trip, and uses Google’s Gemini AI tool to help locate items.

Last year, he says, was dominated by conversational AI and ChatGPT. He now argues that 2024 is the year of what he calls “multimodal AI”. “That might be showing video and images, and being able to extract meaningful information and assist you in an exciting way.”

He highlights Google Gemini: “For example, with that you can record meetings and it assists you with voice labels and an account of a meeting, it’s genuinely helpful and it’s about making people’s lives easier.”

Mr. Spinks says AI has been transformative for people who are blind or have low vision. “I sympathise with people who are genuinely scared of AI but when you have a disability, if something can genuinely add value and be helpful that has to be a great thing. The benefits are too great to ignore.”
