Artificial Intelligence: The Problem of Making Machines Too Human
Source – formtek.com
This past July, the press headlined a comment by Elon Musk in which he called Mark Zuckerberg’s understanding of AI ‘limited’. Earlier in the year, Musk had warned that AI is potentially the most dangerous threat to civilization, needs to be used with care, and should even be regulated. Zuckerberg said he disagreed with that analysis, seeing AI as something extremely positive and unlikely to be misused, which prompted Musk to dismiss Zuckerberg’s understanding of AI.
Ironically, just a few days after Musk’s comment, AI researchers at Facebook reported on a research project that went awry. The Facebook AI Research (FAIR) team was investigating the use of natural language in negotiation. Using machine learning, the team created bots trained on the language found in scripts from thousands of actual person-to-person negotiations. The bots were then allowed to interact with each other on negotiation tasks.
The results are fascinating and perhaps worrisome. Initial attempts produced back-and-forth conversations between the two bots, but few negotiations were ever completed. To push more negotiations to completion, the researchers scored the bots on how quickly they could close a negotiation and on the profitability of the final deal struck.
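The idea of scoring agents on both deal value and speed can be sketched in a toy form. This is purely illustrative and not the actual FAIR training objective; the function name, weights, and turn limit below are all hypothetical.

```python
# Toy sketch of a negotiation score that rewards both the value of the
# final deal and how quickly the negotiation concluded.
# All names and weights here are hypothetical, for illustration only.

def score_negotiation(deal_value: float, num_turns: int,
                      max_turns: int = 20, time_weight: float = 0.1) -> float:
    """Higher deal value and fewer turns yield a higher score."""
    if num_turns > max_turns:
        return 0.0  # negotiation never completed: no reward
    speed_bonus = time_weight * (max_turns - num_turns)
    return deal_value + speed_bonus

# A profitable deal struck quickly scores higher than a slow, marginal one.
fast_good = score_negotiation(deal_value=8.0, num_turns=4)
slow_poor = score_negotiation(deal_value=5.0, num_turns=18)
```

Shaping the score this way pressures agents toward completed, profitable deals, which is consistent with the article's point that the incentive, not any explicit instruction, is what drove the bots' later behavior.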
Among the interesting results: the bots initially mimicked standard English but eventually began using a kind of shorthand in their conversation, which the researchers describe as effectively a different language that a standard English speaker would not understand. The researchers also found that, from the negotiation scripts, the bots learned that the strategy of lying could result in better deals.
The researchers wrote that “we find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it. Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”