How can humans maintain control over AI?

Source – the-japan-news.com

The Yomiuri Shimbun

Expectations for artificial intelligence are expanding, and more and more attempts are being made to use AI in a variety of areas, from finance and medical care to manufacturing. However, some are sounding alarm bells, warning that AI will eventually exceed the capabilities of humans. Shojiro Nishio, president of Osaka University, has been working on computer research for many years. We asked him what AI-related arguments and issues we can expect in the future.

Imitating the human brain

The Yomiuri Shimbun: What is AI?

Shojiro Nishio: AI is technology that uses computers and software to do the work of a human brain. When a person uses software to issue a command to a computer, the computer uses an enormous amount of data to make calculations and quickly produce an answer. Computers can also learn to give answers by themselves.

Q: How do computers learn to do that?

A: They use methods such as deep learning, which imitates the mechanisms of the human brain. By repeating calculations many times as instructed by software, computers learn a pattern. They then become able to create judgment criteria themselves and produce an answer independently of any instructions given to them via software. AI is attracting attention now because the technological environment that makes this possible finally exists.
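
As a rough illustration of this idea (my sketch, not part of the interview), the short Python example below trains a tiny network to reproduce the XOR pattern: the weights start out random, and only after many repeated calculations do they encode a judgment rule that the programmer never wrote explicitly. All names and settings are chosen for this sketch only.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target pattern (XOR)

# Randomly initialised weights: before training, the network has no "criteria".
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):           # "repeating calculations many times"
    h = sigmoid(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # the network's current answer
    # Gradients of the squared error, propagated backwards (backpropagation).
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ dp)
    b2 -= lr * dp.sum(axis=0)
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0)

# After training, the learned weights act as a judgment rule of the network's
# own making: the outputs should now be close to the XOR pattern above.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))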

Q: What kind of environment is that?

A: First of all, there has been an explosive increase in the amount of data available that can be used for calculations. The more a computer can use large amounts of reliable data to make calculations, the more accurate its answers will be.

Q: How large is that increase in data?

A: According to a survey by a U.S. company, the amount of data that existed in the world in 2013 was about 4.4 trillion gigabytes; 1 gigabyte corresponds to the volume of information contained in up to 1,000 books. This amount is expected to grow to 44 trillion gigabytes by 2020. If you stored that data on 7-millimeter-thick tablet devices with 128 gigabytes of storage each and stacked the tablets on top of each other, the 2013 stack would have reached about two-thirds of the way from the Earth to the moon. By 2020, it is said, the stack will reach 6.6 times that distance.
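
Those figures can be checked with a quick back-of-the-envelope calculation. The Python sketch below uses the tablet dimensions cited above and assumes an average Earth-moon distance of about 384,400 kilometers, a figure not given in the interview.

# Rough check of the tablet-stack comparison above. The Earth-moon distance
# is an assumed average value, not from the interview.
GB_2013 = 4.4e12          # gigabytes of data worldwide in 2013 (cited figure)
GB_2020 = 44e12           # projected gigabytes in 2020 (cited figure)
TABLET_GB = 128           # storage per tablet, in gigabytes
TABLET_MM = 7             # thickness per tablet, in millimeters
MOON_KM = 384_400         # average Earth-moon distance in km (assumption)

def stack_vs_moon(gigabytes):
    tablets = gigabytes / TABLET_GB
    height_km = tablets * TABLET_MM / 1e6   # millimeters -> kilometers
    return height_km / MOON_KM

print(round(stack_vs_moon(GB_2013), 2))  # ~0.63, i.e. about two-thirds of the way
print(round(stack_vs_moon(GB_2020), 2))  # ~6.3, close to the 6.6 figure cited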

Q: That’s an incredible amount of data. Computing power must also improve to keep up.

A: Indeed. It is said that computers’ processing power doubles every year and a half. In reality, however, it has been increasing even faster. Related equipment is also becoming cheaper.

Because these computers work like a human brain while leveraging huge amounts of data and great processing power, their productivity is in some cases higher than that of humans. That is why some wonder if AI will take jobs away from people.

Verifying processes

Q: Are there still any technical problems?

A: Yes, there is still a big challenge, namely what to do about the “black box” of AI.

Q: What is that?

A: Sometimes humans cannot determine how AI arrived at a certain answer. In other words, it’s a black box.

An AI called AlphaGo won a Go match against a professional Go player from South Korea in March last year. In May this year, it defeated a Chinese player who is said to be the best in the world. During those matches, however, the AI made moves that humans found difficult to understand. This is not limited to Go; similar things are happening in other areas as well.

Q: That reminds me: there are people who worry that AI will suddenly “go mad.” However, it could be that AI is producing correct answers and people are simply unable to follow or understand them.

A: That is one way of looking at it. I am more concerned about what happens if AI produces incorrect answers or develops in a way that humans cannot control. That sort of thing will happen more and more often as AI is applied to real-life situations. Take self-driving cars: it is dangerous if you cannot explain why a car made the judgment it did.

Q: Is there any way to solve the black box problem?

A: You can check all the processes the computer went through, from the moment the software gave a command until the AI produced an answer. However, creating such software is difficult. Even if you do manage to create it, it will slow the computer's processing to the point that it is no longer practical to use. How to improve such checking is a major research issue.
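
To make the idea of checking every process concrete, here is a minimal sketch (an illustration of the general idea, not the software described here) that records each intermediate value a toy model computes on the way to its answer. Even at this scale the log grows with every step, which hints at why such checking slows real systems down.

# A toy illustration of process checking: record every intermediate value a
# small model computes on the way to its answer. Real AI systems produce far
# too many intermediate values to log this cheaply.
import numpy as np

def traced_forward(x, layers):
    """Apply each layer to x, keeping a log of every intermediate result."""
    trace = [("input", x)]
    for name, W, b in layers:
        x = np.tanh(x @ W + b)           # one processing step
        trace.append((name, x.copy()))   # store it for later inspection
    return x, trace

rng = np.random.default_rng(1)
layers = [(f"layer{i}", rng.normal(size=(4, 4)), np.zeros(4)) for i in range(3)]
answer, trace = traced_forward(rng.normal(size=(1, 4)), layers)

# A human reviewer could now inspect every step the model took...
for name, value in trace:
    print(name, np.round(value, 2))
# ...but keeping and examining this log is exactly the overhead that makes
# full process checking impractical at real-world scale.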

Guidelines

Q: How do people in other countries think about the black box?

A: They believe it’s a problem. The important thing is to have an international response.

Q: What do you mean by that?

A: AI does not exist in isolation. Eventually, it will be connected to the internet and affect things on a global scale. The black box was on the agenda at the meeting of information and communications technology ministers from the seven major advanced economies held in Takamatsu last spring. At that meeting, Japan proposed creating international guidelines for AI development. The Internal Affairs and Communications Ministry announced a draft plan in June this year.

Q: What kind of plan is it?

A: The plan stipulates that developers should make sure that they can explain the results of the judgments their AI makes, that they can control their AI, and so on. The ministry will now seek to form a consensus both in Japan and abroad.

Q: Do you think it is enough to ask people to make sure of such things, or to issue guidelines? Neither is enforceable.

A: If you focus too much on making things enforceable, you could end up hindering AI development.

Q: Why is that?

A: Researchers and companies will lose motivation. Also, any requirement for transparency has to be reconciled with intellectual property and trade secrets. Verifying that an AI can be controlled also takes time and money. What matters is where we stand on issues like these. As we wrestle with them, however, it is essential to keep the black box in mind, including from an ethical point of view. (This interview was conducted by Yomiuri Shimbun Senior Writer Keiko Chino.)

Nishio specializes in information science and technology. He graduated from the Graduate School of Engineering, Kyoto University. He has held positions such as executive vice president, director and professor at Osaka University, and director of the university's Cybermedia Center. He assumed his current position in 2015. He was named a Person of Cultural Merit in 2016.
