Beware the dark side of artificial intelligence

Source – thestar.com

I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.

Once they can do that, we face two serious issues.

First, how do we teach these creatures to tell right from wrong, in our own self-defence?

Second, robots will self-improve faster than we slow-evolving humans can, outstripping us intellectually, with unpredictable outcomes.

Kurzweil recalls a 1999 conference of A.I. experts who were polled on when they thought the Turing test (the point at which a computer can pass as human in conversation) would be achieved. The consensus was 100 years, and a sizable contingent thought it would never happen. Today, Kurzweil thinks we're at the tipping point toward intellectually superior computers.

A.I. combines mainstream technologies that are already shaping our everyday lives. Computer games are a bigger industry than Hollywood. Health-care diagnosis and targeted treatments, machine learning, public safety and security, and driverless transportation are a few of the current applications.

But what about the longer-term implications?

Physicist Stephen Hawking warns, “ … the development of full artificial intelligence could spell the end of the human race. Once humans develop full A.I., it will take off on its own and redesign itself at an ever-increasing rate … Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Speaking at an MIT symposium last year, Tesla CEO Elon Musk said, “I think we should be very careful about A.I. If I were to guess what our greatest existential threat is, I’d say it’s probably that. With artificial intelligence we are summoning the demon.”

Bill Gates wrote recently, “I am in the camp that is concerned about super intelligence.” He thinks machines will initially do a lot of work for us that isn’t especially challenging, but that a few decades later their intelligence will have advanced to the point of real concern.

They are joined by Stuart Armstrong of the Future of Humanity Institute at Oxford University. He believes machines will work at speeds inconceivable to humans. They will eventually stop communicating with us and take control of our economy, financial markets, health care and much more. He warns that robots could make us redundant and take over from their creators.

Last year, Musk, Hawking, Armstrong and other scientists and entrepreneurs signed an open letter. It acknowledges the great potential of A.I., but warns that research into its rewards must be matched by an effort to avoid its potential for serious damage.

There are those who hold less pessimistic views. Many of them are creators of advanced A.I. technology.

Rollo Carpenter, creator of Cleverbot, is typical. His technology learns from past conversations. It scores highly in Turing-style tests because it fools a large proportion of people into believing they’re talking to a human. Carpenter thinks we are a long way from full A.I. and that there is time to address the challenges.
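
To make the idea concrete, here is a minimal Python sketch of a chatbot that “learns from past conversations” by storing every exchange it sees and replying with the answer that followed the most similar remembered prompt. It is purely illustrative; Cleverbot’s actual system is proprietary and far more sophisticated.

```python
# Toy retrieval-style chatbot: remembers (prompt, reply) pairs and answers a new
# prompt with the reply attached to the most similar prompt it has seen before.
from difflib import SequenceMatcher

class TinyChatbot:
    def __init__(self):
        self.memory = []  # (prompt, reply) pairs from past conversations

    def learn(self, prompt, reply):
        """Record one exchange from a past conversation."""
        self.memory.append((prompt.lower(), reply))

    def respond(self, prompt):
        """Return the reply that followed the most similar remembered prompt."""
        if not self.memory:
            return "Tell me more."
        best = max(self.memory,
                   key=lambda pair: SequenceMatcher(None, prompt.lower(), pair[0]).ratio())
        return best[1]

bot = TinyChatbot()
bot.learn("How are you today?", "I'm doing well, thanks for asking.")
bot.learn("What's your favourite colour?", "Blue, most days.")
print(bot.respond("How are you?"))  # -> "I'm doing well, thanks for asking."
```

The more conversations such a system absorbs, the more human its replies can sound, which is exactly why it can fool people without possessing anything like full A.I.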

Meanwhile, what’s being done to teach robots right from wrong before it’s too late? Quite a lot, actually. Many who teach machines to think agree that the more freedom machines are given, the more they will need “moral standards.”

The virtual school GoodAI is a prime example. Its mission is to train artificial intelligence in the art of ethics: how to think, reason and act. The students are hard drives, and they’re being taught to apply their knowledge to situations they’ve never faced before. A digital mentor is used to police the acquisition of values.
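
As a rough illustration of how a “digital mentor” might police the acquisition of values, here is a toy Python sketch in which a learner proposes actions and only mentor-approved ones are reinforced. The rule list and function names are hypothetical; this is not GoodAI’s actual curriculum or code.

```python
# Hypothetical mentor that vets a learner's proposed actions against a
# hand-written list of forbidden behaviours before they can be reinforced.
FORBIDDEN = {"deceive user", "delete records", "harm human"}

def mentor_approves(action: str) -> bool:
    """The mentor's 'moral standard': reject anything on the forbidden list."""
    return action not in FORBIDDEN

def train_step(proposed_actions):
    """Split proposals into those the learner may reinforce and those vetoed."""
    approved = [a for a in proposed_actions if mentor_approves(a)]
    vetoed = [a for a in proposed_actions if not mentor_approves(a)]
    return approved, vetoed

approved, vetoed = train_step(["answer question", "deceive user", "fetch data"])
print("reinforced:", approved)  # -> ['answer question', 'fetch data']
print("vetoed:", vetoed)        # -> ['deceive user']
```

The hard part, of course, is that a fixed rule list like this cannot cover “situations they’ve never faced before,” which is why the real research problem is so much harder than this sketch suggests.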

Other institutions are teaching robots how to behave on the battlefield. Some scientists argue robot soldiers can be made ethically superior to humans, meaning they would not rape, pillage or burn down villages in anger.

Despite these precautions, it’s clear that artificial intelligence applications are advancing faster than our “moral preparedness.” If this naive state of affairs persists, the consequences could be catastrophic.
