HOW IS ARTIFICIAL INTELLIGENCE TRANSFORMING THE LIVES OF PEOPLE WITH DISABILITIES?
Leveraging Artificial Intelligence to Create Impactful Products for People with Disabilities
Technology is an excellent way to enhance the lives of people with disabilities. With the advent of artificial intelligence, several avenues of research have opened up that focus on improving the lives of people with impairments.
For instance, Facebook has designed an AI tool that can help blind users “see” their feeds. The model describes the images in a blind person’s Facebook feed, so someone using a screen reader gets an idea of what is going on in each picture. This means people with visual impairment no longer have to hear a screen reader say only “photo by John Doe.” Similarly, Google’s ‘Look to Speak’ app uses machine learning and computer vision to let users select phrases with their eyes, which the phone then speaks aloud.
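The idea behind automatic alt text can be sketched in a few lines: an image-recognition model detects concepts in a photo, and the system composes a sentence for the screen reader from the confident ones. The sketch below mocks the detection step with hypothetical model output; the function name and the confidence threshold are illustrative, not Facebook's actual implementation.

```python
# Minimal sketch of alt-text composition. A real system would run an
# image-recognition model to produce the (concept, confidence) pairs;
# here they are hard-coded for illustration.

def build_alt_text(author, concepts, min_confidence=0.8):
    """Compose a screen-reader-friendly string from detected concepts,
    keeping only those above the confidence threshold."""
    kept = [concept for concept, conf in concepts if conf >= min_confidence]
    if not kept:
        # Fall back to the bare caption a screen reader used to announce.
        return f"Photo by {author}."
    return f"Photo by {author}. Image may contain: {', '.join(kept)}."

# Hypothetical model output for one feed photo.
detections = [("two people", 0.97), ("smiling", 0.91),
              ("outdoors", 0.85), ("car", 0.42)]
print(build_alt_text("John Doe", detections))
# → Photo by John Doe. Image may contain: two people, smiling, outdoors.
```

Filtering by confidence matters here: announcing a low-confidence guess like “car” aloud would mislead the listener, so it is safer to omit it.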
Similarly, OrCam, a Jerusalem-based company, has developed an AI-based device called OrCam Read. This handheld device can read full pages or screens of text aloud from any printed or digital surface, including newspapers, books, product labels, and computer and smartphone screens. Through this device, OrCam aims to help people with reading challenges, such as dyslexia, mild to moderate vision loss, and reading fatigue, as well as those who read large volumes of text.
Even giants like Microsoft have started a five-year program called ‘AI for Accessibility,’ with an investment of US$25 million, aiming to put AI in the hands of developers and make the world more accessible through AI solutions for people with disabilities. Artificial intelligence not only assists people with physical disabilities but is also helping people struggling with learning problems and mental health issues. For example, Microsoft’s Windows Hello uses biometric login (fingerprint, face, or iris), which can work for people with physical disabilities or those with dyslexia who might struggle to remember passwords. AI chatbots like Woebot and Wysa make consultation for mental health woes available 24/7, beyond therapists’ office hours.
Meanwhile, flashing lights and animations can trigger seizures in people with photosensitive epilepsy. This is why accessiBe, a web accessibility platform, enables epileptic users to disable various types of animation, such as GIFs and videos, so that they can browse the web without complications. Voiceitt is an app for people with speech impediments, including both those who need it temporarily after strokes and brain injuries and those with longer-term conditions like cerebral palsy, Parkinson’s, and Down syndrome. The app uses machine learning to pick up a speaker’s unique speech patterns, recognize mispronunciations, and rectify them before creating an audio or text output. Livio AI, developed by Starkey, an AI medical device company, is a hearing aid that enhances the hearing experience by quieting external noise from the environment and tracks health-related data so patients can seek help during emergencies.
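The personalization step that apps like Voiceitt rely on can be illustrated with a toy sketch: a per-user lexicon maps the speaker's recognized, atypical pronunciations to the words they intend, before text or audio output is produced. This is not Voiceitt's actual pipeline; the lexicon entries and the fuzzy-matching fallback are hypothetical stand-ins for a learned model.

```python
# Illustrative sketch of a per-user pronunciation dictionary. A real
# system would learn this mapping from recorded training sessions.
from difflib import get_close_matches

# Hypothetical mapping from one user's pronunciations to intended words.
USER_LEXICON = {
    "wa-er": "water",
    "peas": "please",
    "mo": "more",
}

def normalize_utterance(tokens, lexicon):
    """Replace each recognized token with the user's intended word."""
    out = []
    for tok in tokens:
        if tok in lexicon:
            out.append(lexicon[tok])
        else:
            # Fall back to fuzzy matching against known pronunciations,
            # leaving unrecognized tokens untouched.
            match = get_close_matches(tok, lexicon.keys(), n=1, cutoff=0.8)
            out.append(lexicon[match[0]] if match else tok)
    return " ".join(out)

print(normalize_utterance(["mo", "wa-er", "peas"], USER_LEXICON))
# → more water please
```

The key design point is that the mapping is per-speaker: the same acoustic input that a generic recognizer would reject becomes reliable once it only has to be consistent with that one user's patterns.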
Thanks to artificial intelligence, autonomous vehicles also promise to give people with disabilities more mobility than ever before. Once self-driving vehicles are fully integrated into society, they can be a valuable resource for people with different disabilities, including motor impairments, reducing their dependence on other people or public transport.
Further, most existing testing methods are highly ineffective at pinpointing learning disabilities like dyslexia or dyscalculia. Artificial intelligence can help teachers and healthcare professionals spot early signs of such conditions and support students accordingly. For instance, the Australian startup Dystech has developed a screening app for early detection of these learning disorders.
Built on Amazon Web Services (AWS), Dystech employs artificial intelligence and machine learning to screen users for dyslexia or dysgraphia. For the former, the AI is trained on datasets of audio recordings from both dyslexic and non-dyslexic adults and children; during assessment, users read aloud words that appear on the screen while being recorded on their smart device. For dysgraphia, the app screens a photo of handwritten text. After a 10-minute screening test, the app informs users of their likelihood of having dyslexia or dysgraphia.
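The screening idea can be made concrete with a toy sketch: extract simple features from a recorded read-aloud session, such as reading speed and hesitation rate, and feed them to a classifier that outputs a likelihood. This is not Dystech's model; the features and weights below are illustrative placeholders, not parameters from any trained system.

```python
# Toy illustration of likelihood screening from read-aloud features.
import math

def screening_score(words_read, seconds, hesitations,
                    weights=(-0.05, 2.0), bias=3.0):
    """Return a logistic score in [0, 1]; higher suggests a stronger
    indication of reading difficulty. Weights are made up for the sketch;
    a real model would learn them from labeled recordings."""
    wpm = words_read / (seconds / 60.0)          # reading speed
    hesitation_rate = hesitations / words_read   # hesitations per word
    z = bias + weights[0] * wpm + weights[1] * hesitation_rate
    return 1.0 / (1.0 + math.exp(-z))

fluent = screening_score(words_read=120, seconds=60, hesitations=2)
slow = screening_score(words_read=40, seconds=60, hesitations=10)
print(f"fluent reader: {fluent:.2f}, struggling reader: {slow:.2f}")
```

Even this crude version shows why the output is framed as a likelihood rather than a diagnosis: the score only flags that a professional assessment may be warranted.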