Can AI be cutting-edge in the geopolitical scenario?
It all started when Alan Turing asked himself, in 1950, whether machines could think. The novels of Isaac Asimov, creator of the famous Three Laws of Robotics, the myths of ancient Greece and other stories from past centuries show that the same question has occupied scientists and the general public for much longer.
Six years after Turing’s question, John McCarthy, Marvin Minsky and their colleagues used the term “artificial intelligence” (AI) for the first time. Today, this technological concept is foreign to no developed country, nor is its application foreign to the most advanced companies.
The utopia of having a non-organic intelligent agent obeying orders has also caught the attention of Defense ministries around the world. In 2017, China released its strategy to position itself at the forefront of AI research. A year later, the United States assigned $2 billion to the advancement of this technology. Countries such as Russia, Japan and the United Kingdom have also joined this global contest with major contributions, creating a widespread sense of a “new arms race” that once again runs through universities, private companies, and governments.
John McCarthy, one of the pioneers of AI, defined it in 1956 as “the science of creating intelligent machines”. Although the definition of intelligence is controversial, early AI scientists proposed language as a way to channel and manifest it. One of them was Turing, also famous for his role in breaking the Enigma cipher, which allowed the Allies to read Nazi communications during WWII. He devised the famous “Turing Test”: a machine would be considered “intelligent” if it could converse with a human without the human realizing that the interlocutor was a machine.
We should note that both humans and machines need large doses of information to understand what is happening around them. Lacking innate meaning, machines represent the outside world from data packets, or datasets. The content of this data is vital to the construction of the artificial “mind”, as are the cognitive and moral traits of the mathematician or developer who writes the algorithm, since the system’s behavior depends on both. In other words, the developer plays the role of father or mother, and the data educates the machine. This presents a problem: developers’ biases may end up embedded in the AI, which could, for instance, take on racist leanings.
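The mechanism behind this is worth making concrete. A minimal sketch, using entirely hypothetical toy data: a naive frequency-based model does nothing more than reproduce the correlations present in its training labels, so if the labeling process was biased against one group, the “learned” model inherits that bias exactly.

```python
# Minimal sketch of how bias in a dataset propagates into a model.
# The data below is invented for illustration: labels over-represent
# group "A" as "suspicious", mirroring a biased labeling process,
# not any ground truth.
from collections import Counter

training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(data):
    """Estimate P(suspicious | group) by simple counting."""
    totals, positives = Counter(), Counter()
    for group, label in data:
        totals[group] += 1
        if label:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

model = train(training_data)
# The model assigns 75% "suspicion" to group A and 25% to group B,
# purely because of how the training labels were assigned.
print(model)  # {'A': 0.75, 'B': 0.25}
```

No real-world system is this simple, but the principle scales: more sophisticated models fit the same skewed correlations, only less transparently.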
The future of the arms race?
The possibility of minimizing human risks and maximizing effectiveness in a conflict scenario makes armies the first parties interested in betting on AI. In fact, Russian leader Vladimir Putin has even declared that “whoever leads the race for AI will rule the world.” AI applications in the world of security and defense are ever-growing: they can accelerate the identification of suspects through pattern-finding and image recognition, train military personnel by simulating specific environments, reinforce the resilience of computer systems, reduce the number of human soldiers on the battlefield, and improve the precision of military weaponry on a tremendous scale.
Hence, autonomous weapons are one of the most visible faces of this new generation. Defined by the United States Department of Defense as systems that can “select and engage targets without the intervention of a human operator”, they are especially useful in reconnaissance or patrol missions abroad. Their ability to get close to the target also makes them well suited to dangerous or long-term missions, reducing the risks stemming from human needs, such as fatigue, stress, fear and moral dilemmas, as well as the risk of losing sensitive information if a person is captured. In addition, AI can learn from its environment and process information about it, increasing the chances of mission success.
At the end of Barack Obama’s presidency, in October 2016, the White House released a report outlining the risks and opportunities of AI for the American economy and homeland security. Following Trump’s victory, the White House’s interest in AI seemed to wane until February 2019, when a series of measures was announced to maintain United States leadership in AI. A few months earlier, the Pentagon had launched the AI Next program, with an investment of close to $2 billion.
Chinese efforts in this area are a natural continuation of the Made in China 2025 plan, a strategy that aims to make China a leading technology country. In the number of patents and most-cited articles, China already surpasses the United States, although it still lacks the researchers needed to keep driving its industry.
The way forward
The development of cutting-edge technology is usually reserved for wealthy countries that can afford large investments in R&D, and these are also the first to benefit economically. This could lead to what Israeli author Yuval Noah Harari has called “data colonialism”: “a new and uneven way of interacting between states, in which companies would collect data from countries with less developed privacy laws, process them in countries where AI is available and reap the benefits there.”
It is undeniable that AI will penetrate our daily life; in fact, it already has. So it is worth asking what the objective of each research effort is, weighing the consequences and ensuring that the procedure follows a logical and ethical line. This is probably one of the points that should concern us most: how imperfect algorithms can find their way into institutions as important as armies, law firms, and police stations, and how to ensure that oversight remains a human responsibility rather than an AI one.
Responsibility for decision-making remains one of the tasks still pending regulation in the future development and implementation of AI.