Google shows off far-flung A.I. research projects as calls for regulation mount
Artificial intelligence and machine learning are crucial to Google and its parent company Alphabet. Recently promoted Alphabet CEO Sundar Pichai has been talking about an “AI-first world” since 2016, and the company uses the technology across many of its businesses, from search advertising to self-driving cars.
But regulators are expressing concern about AI’s growing power and the limited understanding of how it works and what it can do. The European Union is exploring new AI regulation, including a possible temporary ban on the use of facial recognition in public, and New York Rep. Carolyn Maloney, who chairs the House Oversight and Reform Committee, recently suggested that AI regulation could be on the way in the U.S., too. Pichai recently called for “clear-eyed” AI regulation amid a rise in fake videos and abuse of facial recognition technology.
Against this backdrop, the company held an event Tuesday to showcase the positive side of AI, demonstrating some of the long-term projects it’s working on.
“Right now, one of the problems in machine learning is we tend to tackle each problem separately,” said Jeff Dean, head of Google AI, at Google’s San Francisco offices Tuesday. “These long arcs of research are really important to pick fundamental important problems and continue to make progress on them.”
While most of Google’s projects are still years out from broad use, Dean said they are important in moving Google products along.
Here’s a sampling of the company’s more speculative, long-term AI projects:
Robots learning to walk
Google’s D’Kitty is a four-legged robot that the company says learned to walk on its own, using machine learning techniques to study locomotion. Dean said he hopes Google’s research and development findings will help machines learn how to operate physical hardware in “the real world.”
Gesture-sensing clothing
Using braided electronics woven into soft materials, Google’s artificial intelligence technology can map gestures to media controls. In one prototype, sweatshirt drawstrings could be twisted to adjust music volume and pinched to play or pause the music.
Real-time transcription
A new transcription feature in Google Translate will convert speech into a written transcript and will come to Android phones at some point in the future. Natural language processing, a subset of artificial intelligence, is “of particular interest” to the company, Dean said.
Google Translate currently supports 59 languages.
Detecting eye diseases
Google Health announced new research Tuesday, showing that when the company’s AI is applied to retinal scans, it can help determine if a patient is anemic. It can also detect diabetic eye diseases and glaucoma, Dean said. The company hopes to analyze other diseases in the future.
Tracking endangered species
Google is using sensing tools to track marine life. Using sound detection and artificial intelligence, the company said it can now detect orcas in real time and alert harbor managers to help protect the endangered species.
Google announced Tuesday that it’s teaming up with Fisheries and Oceans Canada (DFO) and Rainforest Connection to track critically endangered Southern Resident killer whales in Canada. The company is also in the early stages of working with the Monterey Bay Aquarium to help detect species in the nearby ocean.
Sign language detection
Google is working on a project called MediaPipe, which analyzes body movements in video, including hand tracking. Dean said the company hopes to use it to read and analyze sign language.
“Video is the next logical frontier for a lot of this work,” Dean said.
All in all, the day resembled a science fair more than anything else, but it helped make Google’s point that artificial intelligence can have useful real-world applications.