How Google Is Using AI To Make Voice Recognition Work For People With Disabilities

Source: forbes.com

Want to schedule an appointment? Just ask your phone. Need to turn on your bedroom lights? Google Home has you covered.

Now a $49 billion market, voice-activated systems have gained popularity among consumers, thanks to their ability to automate and streamline mundane tasks. But for people with impaired speech, technologies that rely on voice commands have proved to be far from perfect.

That’s the impetus for Google’s newly formed Project Euphonia, part of the company’s AI for Social Good program. The project team is exploring ways to improve speech recognition for people who are deaf or have neurological conditions such as ALS, stroke, Parkinson’s, multiple sclerosis or traumatic brain injury.

Google has partnered with the nonprofit organizations ALS Therapy Development Institute and ALS Residence Initiative (ALSRI) to collect recorded voice samples from people with ALS, a neurodegenerative disease that often leads to severe speech and mobility difficulties.

For those with neurological conditions, voice-activated systems can play a key role in completing everyday tasks and conversing with loved ones, caregivers or colleagues. “You can turn on your lights, your music or communicate with someone. But this only works if the technology can actually recognize your voice and transcribe it,” says Julie Cattiau, a product manager at Google AI.

The company’s speech recognition technology uses machine learning algorithms that require extensive training data. “We have hundreds of thousands, or even millions, of sentences that people have read—and we use them as examples for the algorithms to learn how to recognize each,” says Cattiau. “But it’s not enough for people with disabilities.”
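
To make that data requirement concrete, the sketch below shows roughly what it can look like to adapt a speech recognizer to one person's voice from a small set of their own recordings. It is only an illustration under assumptions: it uses the publicly available, open-source wav2vec 2.0 model via Hugging Face Transformers rather than any of Google's own systems, and the recording file paths, transcripts and hyperparameters are hypothetical.

```python
# Illustrative sketch only -- not Google's internal pipeline. It adapts the
# open-source wav2vec 2.0 recognizer to one speaker's voice using a handful
# of recorded sentences, the kind of samples the article describes.
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical recordings of one speaker reading known sentences (16 kHz WAVs).
samples = [
    ("recordings/sentence_01.wav", "TURN ON THE BEDROOM LIGHTS"),
    ("recordings/sentence_02.wav", "SCHEDULE AN APPOINTMENT FOR TUESDAY"),
]

model.train()
for path, transcript in samples:
    waveform, rate = sf.read(path)
    inputs = processor(waveform, sampling_rate=rate, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(input_values=inputs.input_values, labels=labels).loss
    loss.backward()       # gradients from this one speaker's example
    optimizer.step()      # nudge the model toward this speaker's voice
    optimizer.zero_grad()
```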
