Don’t worry, artificial intelligence isn’t any better than we are

Facial recognition technology matched 28 members of Congress with photos from a database of about 25,000 criminal mugshots, according to a recent report from the American Civil Liberties Union. And that was apparently surprising.

Oh, the jokes write themselves.

Actually, none of the mugshots were of sitting public officials. The technology, developed by Amazon, turned up false positives for 28 of Congress’s 535 members, a rate of about 5 percent.

Being right 95 percent of the time isn’t too shabby. And as someone who struggles to put a name with a face embarrassingly often, I can sympathize with the Amazon software’s occasional mistakes.

The trouble is, law enforcement departments are starting to roll out facial recognition technology in crime fighting. Mistaking 5 percent of Congress for criminals is kind of funny. Mistaking 5 percent of the unsuspecting public for criminals is decidedly less so.

And, of course, there’s a racial component that is also not funny.

Turns out the Amazon software was more likely to match black and Latino members of Congress with mugshots than their white peers. Other studies have shown similar biases in facial recognition technologies.

It doesn’t take a lot of imagination to see how that could be problematic when deployed as part of a law enforcement strategy. Suffice it to say that Amazon’s facial recognition technology is probably not ready for prime time.

It’s worth noting that Amazon doesn’t recommend that its software be used to round up people on the street. It’s designed to help police departments narrow down lists of potential suspects, and it can also help identify victims of human trafficking and find missing kids.

But the broader lesson is that, despite what feel like near-daily quantum leaps in technology, we humans are still going to have to rely on ourselves and our fellow non-robots for a while longer.

The Washington Post, for example, has published thousands of stories written by its Heliograf artificial intelligence system. The technology allows the paper to cover every Washington, D.C.-area high school football game, a task that would otherwise take an army of reporters.

But the AI-produced stories are not exactly going to win any Pulitzers anytime soon. They’re meant to free up journalists to cover juicier topics, not leave them unemployed. And nobody has figured out a very compelling way to teach computers to form their own opinions, so my job is safe from the robots — for now.

Facebook, which saw an eye-popping $120 billion in market value disappear this week when its stock dropped, also faced some artificial intelligence-related controversy last year when a report found that its automated ad system let advertisers target people interested in topics like “how to burn Jews.”

Not great.

Similarly, Tay, a Twitter bot Microsoft launched in 2016, was supposed to learn from its conversations with human users in order to tweet like a convincing teenage girl.

Apparently no one at Microsoft had even briefly perused Twitter, which is not a good place to learn much of anything — certainly not how to warmly interact with other humans. It only took a few hours for Tay to start tweeting “RACE WAR NOW.”

Ah, social media.

But AI mishaps aren’t always just humiliating.

Along the same lines as the concern over facial recognition technology, a ProPublica study of an algorithm used to help judges set bail by predicting defendants’ risk of reoffending found that it recommended non-white defendants be detained until trial at higher rates than white defendants with similar profiles.

Software also regularly helps make automated decisions about hiring, customer service, lending and dozens of other matters. Humans can certainly be biased in those tasks too. But algorithms learn from data, which is collected and compiled by biased humans. If the data is biased, so is the algorithm.
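The mechanics are simple enough to sketch in a few lines of Python. The data below is entirely invented, and the “model” is a deliberately crude majority-vote rule rather than anything a real lender would deploy, but it shows the core problem: train on skewed decisions and you get a skewed decision-maker.

```python
# A minimal sketch (hypothetical data) of how bias in training data
# flows straight into an algorithm's decisions.
from collections import defaultdict

# Invented historical lending decisions, already skewed by human bias:
# every applicant is equally qualified, but group B was approved less often.
history = [
    ("A", "qualified", "approved"), ("A", "qualified", "approved"),
    ("A", "qualified", "approved"), ("A", "qualified", "denied"),
    ("B", "qualified", "approved"), ("B", "qualified", "denied"),
    ("B", "qualified", "denied"),   ("B", "qualified", "denied"),
]

# "Training": collect past outcomes for each group.
outcomes = defaultdict(list)
for group, _, decision in history:
    outcomes[group].append(decision)

# The learned rule is just the most common past decision per group.
model = {g: max(set(ds), key=ds.count) for g, ds in outcomes.items()}

# The model faithfully reproduces the bias it was trained on:
print(model)  # {'A': 'approved', 'B': 'denied'}
```

Real systems are vastly more sophisticated, but the failure mode is the same: the algorithm has no way of knowing that the examples it learned from were unfair.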

And in at least one instance, artificial intelligence has killed a person.

An Uber test vehicle operating in self-driving mode recently hit and killed a pedestrian in Arizona. According to a later investigation, the onboard system designed to detect obstacles in the road identified the woman too late, and the car’s emergency braking had been disabled to help ensure a smoother ride.

In other words, for all our advances, artificial intelligence still isn’t much smarter or more effective than the humans who built it.

That’s not to say there aren’t some valuable applications. Facial recognition technology can save lives if used responsibly, for instance. Self-driving technology can help alert, attentive human drivers avoid crashes. Maybe one day bots like Tay will write poetry instead of racist rants.

But until then, we’ll still need to rely on humans, flawed and fallible as we are.

Ed Buckley is an editorial writer with The Post and Courier.
