Source – fortune.com
When artificial intelligence works smoothly, computers are able to spot cats in photographs at lightning-fast speeds. When it goes wrong, they can confuse images of turtles with rifles.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have discovered how to trick Google’s software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software classified it as a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.
The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.
For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.
All the changes the MIT researchers made to the turtle image were imperceptible to the human eye, explained Anish Athalye, an MIT researcher and PhD candidate in computer science who co-led the experiment.
After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modifications would still trick Google’s computers. The researchers then took photos and video of the 3D-printed turtle, and fed that data into Google’s image-recognition software.
Sure enough, Google’s software classified the turtle as a rifle.
MIT released an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for further review at an upcoming AI conference.
Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns. If researchers feed enough images of cats into these neural networks, they learn to recognize patterns in those images so they can eventually spot felines in photos without human help.
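The learning process described above can be sketched with a toy model. This is a hedged illustration, not Google’s system: a single artificial “neuron” (logistic regression, the simplest building block of a neural network) trained by gradient descent on synthetic two-number feature vectors standing in for cat and non-cat images. Given enough labeled examples, it learns a decision rule without any hand-coded definition of a cat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for images: "cat" feature vectors cluster around
# one point, "not cat" vectors around another.
n = 200
cats = rng.normal(loc=1.0, scale=0.5, size=(n, 2))
others = rng.normal(loc=-1.0, scale=0.5, size=(n, 2))
X = np.vstack([cats, others])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = cat, 0 = not cat

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One "neuron": weights and bias adjusted by gradient descent so that
# predictions match the labels, mimicking how a network learns patterns.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the model labels a point it has never seen.
new_example = np.array([1.1, 0.9])
print(sigmoid(new_example @ w + b) > 0.5)  # True: classified as "cat"
```

Real image-recognition systems stack millions of such units into deep networks, but the principle is the same: patterns are learned from labeled examples rather than programmed by hand.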
But these neural networks can stumble when fed certain kinds of pictures, such as those with poor lighting or partially obstructed objects. How these networks reach their decisions is still somewhat mysterious, Athalye explained, and researchers still don’t fully understand why they recognize some images accurately and not others.
The MIT team’s algorithm created what are known as adversarial examples: computer-manipulated images crafted to fool object-recognition software. While the turtle image still resembles a reptile to humans, the algorithm altered it so that it shares characteristics, invisible to people, with images of rifles. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm persist in the physical world.
Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook and Amazon would also likely blunder, he speculates, because of their similarities.