Source – firstpost.com
The notion of artificial intelligence is something which has long excited technological society. Among the various stories constructed around it are (as in the film Terminator) those of robots ruling the world with humans fighting a losing battle against them. Garry Kasparov, the world chess champion, was matched against an IBM supercomputer named Deep Blue under tournament conditions in 1996 and 1997, and lost the 1997 rematch. Since chess is, in the popular imagination, the height of intellectual prowess, this created quite a stir, and the popular press anticipated that humankind would eventually have to make way for a greater intelligence, one which it had itself created.
‘AI’ is a fairly broad term which includes a number of unglamorous capabilities that fall far short of defeating a reigning chess champion. Capabilities generally classified as AI as of 2017 include understanding human speech, competing at a high level in strategic games (such as chess and the Chinese game Go), driving cars autonomously, running military simulations, and interpreting complex data. Tests have been devised to determine whether a machine can genuinely claim ‘intelligence’, among them: a) a conversation between a human being and two unseen ‘people’, one of which is the machine, to see whether the machine can pass itself off as human; b) passing an examination that a college student would be required to pass; c) holding down an important job and doing at least as well as a human; d) following instructions to assemble furniture.
As may be evident from these tests, the term ‘intelligence’ is being used to mean the capacity to solve problems. Chess itself is a set of problems within fixed parameters. The machine is fed an enormous amount of information about past games and uses it, along with brute-force search, to decide on the right moves, which is very different from the way a human mind works, anticipating a limited number of future moves and ignoring those that are irrelevant. Where a human being would instinctively rule out possibilities and consider only a few, the machine, which is much faster but lacking in this kind of ‘judgement’, might consider every possibility without ruling out any, and still win because of the sheer advantage of its computational speed over the human’s.
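The contrast can be made concrete with a toy sketch: exhaustive minimax search, a standard game-tree technique, applied here to tic-tac-toe (a stand-in for chess, which is far too large to search exhaustively in a few lines). The program examines every legal continuation without ruling any out, exactly the brute-force style of play described above.

```python
# A minimal sketch of exhaustive game-tree search (minimax) on tic-tac-toe.
# The machine "considers every possibility": the counter tallies every
# position it examines on the way to a perfect decision.

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, counter):
    counter[0] += 1                  # count every position examined
    w = winner(board)
    if w == 'X':
        return 1                     # win for the maximising side
    if w == 'O':
        return -1                    # win for the minimising side
    if ' ' not in board:
        return 0                     # draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = board[:i] + player + board[i + 1:]
            scores.append(minimax(nxt, 'O' if player == 'X' else 'X', counter))
    return max(scores) if player == 'X' else min(scores)

count = [0]
value = minimax(' ' * 9, 'X', count)   # search the full game from an empty board
print(f"game value: {value}, positions examined: {count[0]}")
```

The game value comes out as 0 (perfect play is a draw), but only after the program has visited hundreds of thousands of positions, almost all of which a human player would dismiss at a glance. The machine compensates for its lack of ‘judgement’ with sheer enumeration speed, which is precisely the point made above.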
This brings us to the fundamental issue of whether a machine can be endowed with ‘general intelligence’. The fundamental philosophical problem here (enunciated by John Searle) is the difference between something which has a mind and thinks like a human being, and something which merely acts in a manner that would make one believe it was thinking like a human. As an instance of the latter, Deep Blue, in its rematch with Garry Kasparov, had been programmed to mimic human weakness, taking more time and ‘fumbling’, as part of its ‘psychological equipment’ against the champion. While this distinction has generally been put aside, scientists identifying ‘acting like it has a mind’ with ‘having a mind’, I would suggest that it is precisely this distinction that separates AI from human intelligence.
If we consider ‘intelligence’ as a notion, the commercialization of all activity has created a need for its measurement, and IQ (the most common measure of intelligence) is identified with the capacity to solve problems, which is also the way AI is regarded. But the issue here is whether the human mind can be valued only for its capacity to solve problems. A ‘problem’ is limited to a domain, but human minds have functioned outside clearly defined domains, and their contributions have rested on the nebulousness of their achievements, which are often appreciated only after the individuals die. It is difficult, for instance, to define a great painting or a great musical composition as the solution to a ‘problem’. Even a mathematical theorem is arrived at intuitively and is not merely an answer to a defined problem.
One of the achievements of AI is to make predictions in situations the way a human being would, but unlike the human, the AI uses statistical methods to guess the outcome of an event. But consider, as an example, a kind of understanding that a human being has in everyday situations which an AI would not. It is not impossible, I propose, for a person with some worldly experience to guess from someone’s body language whether he or she is speaking the truth, especially when that person is known. To enable an AI to repeat this feat, it would need to be fed information on the elements of body language and the pertinent situation, which might be impossible to define in clear data units.
The achievements of the human mind in history are far too complex and far-ranging for the mind to be defined in terms of a problem-solving, measurable intelligence. It can be argued that the greatest of these achievements were, in fact, a response to the great unknown, which is the universe, rather than attempts at overcoming immediate problems. It is not that the wheel and the electric bulb were not great inventions, but that humanity reached a level of sophistication at which its thinking was freed from having primarily to attend to overcoming obstacles; it began to reflect and speculate about matters which served no immediate purpose, like the creation of art, music and advanced mathematics. Religious inquiry and philosophy are other areas which do not solve practical problems but are nonetheless central to human existence. These ‘advances’ may not have made existence more comfortable for humankind, but the embellishments they introduced transformed human existence. Compared to many of these ‘useless’ achievements, the creation of AI is perhaps even a minor one.
One might gather from the aforesaid that many of the extraordinary things the human mind has achieved in history proceed from a sense that the world cannot be completely known but that ‘progress’ can still be made; nothing is certain, but there is still knowledge. In the process of trying to know (or speculate rigorously about) the unknowable, mankind has perhaps tried to reconstruct itself to resemble the ‘divine’. If God created the world, humankind creates an alternate one that approaches it in richness: a world constituted not by atoms, planets and living beings but by pictures, music and language.
A 2015 survey among leading AI researchers claimed to reveal what the future might hold for humankind. Among the changes anticipated were disasters and accidents becoming things of the past, human intelligence being augmented with implants to improve productivity, the success of relationships being predicted in advance, human-to-human interaction being reduced by the intervention of machines, and humans being relieved of the burden of meaningless work. On the downside, it was felt that the advantages would likely not benefit humankind as a whole but only those who could afford machines, with human workers even losing jobs. But there was a general feeling that there was little humans could do that the computer could not do better. In time AI would itself improve what it was capable of, and progress might become exponential.
In the midst of the debate about how AI could outperform humans in every field, the important fact being lost sight of is that human capability is increasingly being defined in narrowly utilitarian terms. Something which is not materially useful is judged not worth doing; none of the researchers interviewed in the survey wondered whether humankind’s goals should be anything other than ‘tasks’ assigned to a machine. All talk about AI centres on a mechanistic approach to human existence, as though humankind need not look beyond containing disasters, prolonging life, improving productivity and processing information. But are these ‘tasks’ enough to define what it means to be human, or do we have to look beyond them at aspects which might even venture into the domain of metaphysics? It would seem that, as if to prepare for the advent of the AI explosion, intelligent humankind is reducing itself to the level of robots. AI might one day outdo human mental capabilities, but by that time human capability would itself have been redefined to abandon most of what it has valued.