Shopping for groceries is a common activity for many of us, but for visually impaired people, identifying grocery items can be daunting. A team of researchers from the National University of Singapore’s (NUS) School of Computing (NUS Computing) introduced AiSee, an affordable wearable assistive device that helps people with visual impairment ‘see’ objects around them with the help of Artificial Intelligence (AI).

Individuals with visual impairment face daily hurdles, particularly with object identification, which is crucial for both simple and complex decision-making. While breakthroughs in AI have dramatically improved visual recognition capabilities, real-world application of these advanced technologies remains challenging and error-prone. AiSee was first developed in 2018 and has been progressively upgraded over five years. The team aims to overcome these limitations by leveraging state-of-the-art AI technologies, and to empower users with more natural interaction.

Following a human-centered design process, the team found reasons to question the typical approach of glasses augmented with a camera: people with visual impairment may be reluctant to wear such glasses for fear of stigmatization. Instead, the researchers propose alternative hardware that incorporates a discreet bone conduction headphone. The user simply needs to hold an object and activate the built-in camera to capture an image of it. With the help of AI, AiSee identifies the object and provides more information when queried by the user.

AiSee incorporates a micro-camera that captures the user’s field of view. The captured images are processed by the software component, also referred to as the ‘vision engine computer’, which extracts features such as text, logos, and labels from the image. After the user snaps a photo of the object of interest, AiSee uses cloud-based AI algorithms to analyze the image and identify the object.
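The capture-and-identify flow described above can be sketched roughly as follows. This is a minimal illustration, not AiSee's actual implementation: the function names, the stubbed cloud call, and the example recognition result are all hypothetical, standing in for the device's camera and its cloud-based vision engine.

```python
from dataclasses import dataclass, field

@dataclass
class RecognitionResult:
    """Features a vision engine might extract from a captured image."""
    label: str
    text: list = field(default_factory=list)   # OCR'd text on the object
    logos: list = field(default_factory=list)  # detected brand marks

def capture_image(camera):
    """Placeholder for the micro-camera capture step."""
    return camera()  # raw image bytes on a real device

def identify(image_bytes):
    """Hypothetical stand-in for the cloud recognition call.

    A real system would upload the image to a vision service and parse
    labels, text, and logo detections from its response; here we return
    a fixed example result instead.
    """
    return RecognitionResult(
        label="cereal box",
        text=["Whole Grain Oats", "500 g"],
    )

# Simulate the user holding an object and triggering the camera.
result = identify(capture_image(lambda: b"<image bytes>"))
print(result.label)  # → cereal box
```

The point of the sketch is the separation of concerns the article implies: capture happens on the wearable, while the heavy recognition work runs in the cloud.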
The user can also ask a range of questions to find out more about the object. AiSee employs speech-to-text technology to comprehend the user’s queries and text-to-speech technology to deliver its responses.
Powered by a large language model, AiSee excels in interactive question-and-answer exchanges, enabling the system to comprehend and respond to the user’s queries promptly and informatively. Unlike most wearable assistive devices, which require smartphone pairing, AiSee operates as a self-contained system that functions without any additional devices.

The headphone uses bone conduction technology, which transmits sound through the bones of the skull. This ensures that individuals with visual impairment can receive auditory information while still having access to external sounds, such as conversations or traffic noise. This is particularly vital for visually impaired people, as environmental sounds provide essential information for decision-making, especially in situations involving safety.

At present, visually impaired people in Singapore do not have access to assistive AI technology of this level of sophistication. The team is currently in discussions with SG Enable in Singapore to conduct user testing with persons with visual impairment; the findings will help refine and improve AiSee’s features and performance. In addition, a private company has gifted S$150,000 to support the project. SG Enable also seeks to collaborate with NUS to explore how AI, human-computer interfaces, and assistive technology can give persons with disabilities more technological options.
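The question-and-answer loop can likewise be sketched end to end: speech in, an LLM-backed answer, speech out through the headphone. Again, every function here is a hypothetical stub; a real system would call actual speech-recognition, language-model, and speech-synthesis services.

```python
def speech_to_text(audio_bytes):
    """Stub recognizer; a real device would stream audio to a
    speech-to-text service."""
    return audio_bytes.decode()

def answer_query(object_label, question):
    """Stub for the LLM step: a real system would send the recognized
    object plus the user's question to a large language model."""
    return f"This appears to be a {object_label}."

def text_to_speech(answer):
    """Stub synthesizer routed to the bone conduction headphone."""
    return f"[spoken] {answer}"

# One turn of the interaction loop, using a previously recognized object.
question = speech_to_text(b"What is this?")
reply = text_to_speech(answer_query("can of soup", question))
print(reply)  # → [spoken] This appears to be a can of soup.
```

Because the output is played through bone conduction rather than earbuds, the user hears the answer without losing ambient sound, which is the safety property the article emphasizes.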
Source: National University of Singapore