Why AI isn’t nearly as smart as it looks
The intention of this article is not to attack artificial intelligence as a technology.
Apart from the dangers inherent in every revolutionary development, AI has a vast and growing set of benefits, unleashing cascades of advances in the most varied fields of science and technology, freeing people from mental drudgery, making life more convenient and the economy more productive.
That being said, AI still has a long way to go. Given the colossal, unprecedented scale of effort and creativity invested in the development of AI systems, and embodied in their databases, it is not clear that AI has produced more intelligence than humans have put into it.
There is a problematic side to artificial intelligence, which I shall refer to under the general heading of “stupidity.” Here I intend the term “stupidity” to be understood in an analytical, not a pejorative sense. In humans, at least, stupidity and intelligence – even great brilliance – do not exclude each other. They often coexist, as experience teaches us.
In this and the following articles I intend to address the AI stupidity problem in three interrelated dimensions:
A. Inherent weaknesses of computer-based systems designed according to the principles of artificial intelligence as presently understood, which make them stupid in a sense somewhat analogous to stupidity in human beings. The stupidity of AI systems is a root cause for most of the well-known problems and risks connected with their use in real-life situations.
B. The stupidity of AI pioneers and most of their successors – despite great intellectual brilliance – in embracing the notion that human cognition is fundamentally algorithmic in nature; or is ultimately based on processes of an algorithmic type. Likewise the stupidity of adopting digital computing devices as a chief paradigm for understanding the human mind and the brain as a biological organ.
These gratuitous assumptions, having no scientific basis, energized some of the early work on AI systems; but in my view they greatly hamper its further development. As a result, not only AI per se, but a large part of what is nowadays called “cognitive science” became trapped in the tiny, flat world of combinatorics and formal mathematical models.
C. Stupidity induced in human beings by their interactions with AI systems and by the impact of the so-called “cognitive revolution” and its philosophical precursors on language, education, science and culture in general.
The danger is not that AI systems will become more intelligent than humans. Rather, people may become so stupid that they can no longer recognize the difference. Here the same rule applies to specifically human forms of cognition, as to exercising our muscles: “use it or lose it”!
What is stupidity?
It is difficult to nail down the meaning of “stupidity” in a manner that would fit present-day academic criteria. Nevertheless, the phenomenon of stupidity has been a focus of attention in human culture from the earliest known times, reflected in ancient oral traditions, in teachings of elders and wise men, and in countless fables, stories and anecdotes.
From the earliest times, metaphorical humor and irony were cultivated as countermeasures to stupidity. One might rightly speculate that the human race would not have survived without them. Although the “database” has greatly expanded in the course of history, it is doubtful whether people today have more insight into the nature of stupidity than they had 3000 years ago.
That being said, the literature of recent decades on the problem of stupidity contains useful characterizations, on which I draw here.
Typical problem: A company has highly intelligent, capable employees and a seemingly excellent management. After some initial successes it fails and goes bankrupt, as a result of persistent mistakes, misjudgments and poor performance. What happened?
Study of such cases has led scholars and management consultants to the concept of “functional stupidity.” (See for example The Stupidity Paradox – The Power and Pitfalls of Functional Stupidity at Work by Mats Alvesson and André Spicer.)
Although “functional stupidity” generally refers to an institutional context, it sheds light on the whole spectrum of stupidity, from that of individuals to entire societies.
The following is my attempt to capture the essence of stupidity in four points. They do not apply in exactly the same way to each of the dimensions A, B and C identified above, but the analogies should become clear as I go on:
1. Continued adherence to existing procedures, habits, modes of thinking and behavior, combined with an inability to recognize clear signs that these are inappropriate or even disastrous in the given concrete case. Rigid adherence to past experience and rote learning in the face of situations that call for fresh thinking. One could speak of blindly “algorithmic” behavior in the broadest sense.
2. Inability to “think outside the box”, to look at the bigger picture, to mentally jump out of the process in which one is engaged and pose overarching questions such as, “What am I really doing?”, “Does it make sense?” and “What is really going on here?”
3. A tendency to greatly overestimate the efficacy of adopted strategies and methods for dealing with a new problem (exemplified in humans by the Dunning-Kruger effect, and typically exhibited by AI systems that rely on statistical methods of optimization).
4. Lack of ability to grasp the meaning and significance of statements, situations and events – a deficit typically referred to in humans by expressions such as, “This person is too stupid to understand ….”
Stupidity in AI systems
Not surprisingly, with the spread of AI to practically all sectors of society and economy, and growing dependence on AI systems, concern has grown about the consequences of AI failures.
Beyond straightforward errors (such as false identification of objects), malperformance of AI systems can lead to unwanted and harmful consequences in areas where they are supposed to substitute for human judgment.
That includes choosing between alternative courses of action – for example while driving a car in a potential accident situation, or responding to military threats with short warning times – as well as in tasks like processing job applications and designing medical treatments.
The subject here is not whether AI systems perform better or worse than humans in a given case. It is, rather, stupidity as a systemic problem, embracing both the human dimension and that of AI systems.
These are often intertwined, and it is an interesting issue, with legal implications, how responsibility for the consequences of AI failures is to be shared among the designers, the database suppliers, and the managers and users of the system.
AI systems fail and malperform in a great variety of ways. (See Classification Schemas for Artificial Intelligence Failures by Peter J. Scott and Roman V. Yampolskiy.) Few analysts have sought to identify a common denominator in a rigorous manner.
But I am sure that many who have dealt with AI systems long enough – not least those who, often with great insight and ingenuity, have sought to remedy the failings of AI systems and improve their performance – will be able to draw analogies to stupid behavior of human beings.
They have had to invest vast amounts of additional human intelligence into AI systems in the effort to make them less stupid.
Next is Part 2: How algorithmic mechanics hold back today’s AI systems
Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.