AI needs a human touch to function at its highest level

Source – venturebeat.com

There is an old saying that speaks to the current state of AI: “To someone holding a hammer, everything looks like a nail.” As companies, governments, and organizations scramble to be in the vanguard of this new generation of artificial intelligence, they are doing their best to persuade everyone that this technological evolution will resolve all of our human shortcomings. But what exactly will it solve? Machine learning is an incredibly powerful tool, but, like any other tool, it requires a clear understanding of the problems to be solved in the first place, especially when those problems involve real humans.

Human versus machine intelligence

There is an oft-cited bit from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy series in which an omniscient computer is asked for the ultimate answer to life, the universe, and everything. After 7.5 million years, it provides its answer: the number 42. The computer explains to the discombobulated beings who built it that the answer appears meaningless only because they never understood the question they wanted answered.

What is important is identifying the questions machine learning is well suited to answer, the questions it struggles with, and, perhaps most importantly, how the paradigmatic shift in AI frameworks is changing the relationship between humans, their data, and the world that data describes. Neural nets have allowed machines to become uncannily accurate at distinguishing idiosyncrasies in massive datasets, but at the cost of truly understanding what they know.

In his Pulitzer Prize-winning book, Gödel, Escher, Bach: an Eternal Golden Braid, Douglas Hofstadter explores the nature of intelligence. He contemplates the idea that intelligence is built upon tangled layers of “strange loops,” a Möbius strip of hierarchical, abstracted levels that paradoxically wind up where they started. He believes that intelligence is an emergent property built on self-referential layers of logic and abstraction.

This is the wonder that neural nets have achieved: a multi-layered mesh of nodes and weights that passes information from one tier to the next in a digital reflection of the human brain. However, there is one important rule of thumb in artificial intelligence: the more difficult it is for a human to interpret and process something, the easier it is for a machine, and vice versa.
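
To make that mesh concrete, here is a minimal sketch in Python of a feed-forward pass; it is illustrative only, and the layer sizes and random weights are arbitrary assumptions rather than any real system. Each tier multiplies its inputs by a weight matrix, adds a bias, and applies a nonlinearity before handing the result to the next tier.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    """One tier of the mesh: a weighted sum of the inputs passed through a nonlinearity."""
    return np.tanh(inputs @ weights + bias)

# Three tiers of arbitrary, randomly initialized weights:
# 4 inputs -> 8 hidden units -> 8 hidden units -> 2 outputs.
shapes = [(4, 8), (8, 8), (8, 2)]
weights = [rng.normal(size=s) for s in shapes]
biases = [np.zeros(s[1]) for s in shapes]

x = rng.normal(size=4)       # an arbitrary input vector
for W, b in zip(weights, biases):
    x = layer(x, W, b)       # information passes from one tier to the next

print(x)  # the output: a vector of numbers, not an explanation of what the net "knows"
```

The output is just a vector of numbers; nothing in the weights announces what, if anything, the network has come to “understand.”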

A computer can calculate digits of π, encrypt messages using unimaginably large prime numbers, and recall a bottomless Tartarean abyss of information in the blink of an eye, outperforming millennia of human calculation. And yet humans can recognize a friend’s face in an embarrassing baby photo, identify painters by their brush strokes, and make sense of overly verbose and ruminating blog entries. Machine learning has made vast improvements in these domains, but it is no wonder that, as the brain-inspired architecture of neural nets brings machines to parity with humans, and in some cases beyond them, in areas of human cognition, machines are beginning to suffer some of the same problems humans do.

Nature or nurture?

By design, we are unable to know what neural nets have learned; instead, we often keep feeding the system more data until we like what we see. Worse yet, the knowledge they have “learned” is not a set of discrete principles and theories but is contained in a vast network of weights that is incomprehensible to humans. While Hofstadter may have contemplated artificial intelligence as a reflection of human intelligence, modern AI architects do not necessarily share that preoccupation. Consequently, modern neural nets, while highly accurate, do not give us any understanding of the world. In fact, there are several well-publicized instances where AI went awry, producing socially unacceptable results. Within a day of Microsoft’s AI chatbot Tay going live, it learned from Twitter users how to craft misogynistic, racist, and transphobic tweets. Did Tay learn a conceptual sociohistorical theory of gender or race? I would argue not.

Why AI can’t be left unattended

Paradoxically, even if we assume that the purpose of an AI isn’t to understand human concepts at all, those concepts often materialize anyway. As another example of misguided AI, an algorithm was used to predict the likelihood that someone would commit future crimes. The statistically based software models learned racial biases, assigning higher risk scores to black defendants with minimal or no criminal records than to white defendants with extensive histories of violent crime. Facial recognition software is also known to have its biases, to the point that a Nikon camera was unable to determine whether a Taiwanese-American woman had her eyes open. Machine learning is only as good as the data it is built upon, and when that data is shaped by human biases, AI systems inherit those biases. Machines are effective at learning from data but, unlike humans, have little to no proficiency at accounting for what they don’t know, the things missing from the data. This is why even Facebook, which can devote massive AI resources to its efforts to eliminate terrorist posts, concedes that the cleanup process ultimately depends on human moderators. We should be rightfully anxious about firing up an AI whose knowledge is unknowable to us and leaving it to simmer unattended.
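
To see how that inheritance happens, consider a toy sketch in Python with entirely made-up numbers; it models no real recidivism system or dataset. Two groups behave identically, but the historical labels used for training were assigned more harshly to one of them, and anything fit to those labels reproduces the disparity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B (hypothetical)
reoffend = rng.random(n) < 0.30           # identical true rate in both groups

# Biased historical labels: group B was flagged "high risk" far more often
# than its behavior warrants (the 0.55 vs. 0.30 rates are invented for illustration).
flag_prob = np.where(group == 1, 0.55, 0.30)
label = rng.random(n) < flag_prob

# A "model" that simply learns the average label per group: the same value a
# more sophisticated learner would converge toward if group is a usable feature.
risk_a = label[group == 0].mean()
risk_b = label[group == 1].mean()

print(f"Actual reoffense rate, group A: {reoffend[group == 0].mean():.2f}")
print(f"Actual reoffense rate, group B: {reoffend[group == 1].mean():.2f}")
print(f"Learned risk score,    group A: {risk_a:.2f}")
print(f"Learned risk score,    group B: {risk_b:.2f}")
```

Running it shows the learned risk scores tracking the biased labels (roughly 0.30 versus 0.55) even though the underlying behavior is identical in both groups; the model has no way of knowing that the labels, not the people, are where the difference lives.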

The AI community cannot be haphazard about throwing open the AI gates. Machine learning works best when the stakeholders’ problems and goals are clearly identified, allowing us to chart an appropriate course of action. Treating everything as a nail is likely to waste resources, erode users’ trust, and ultimately lead to ethical dilemmas in AI development.
