Google runs into more flak on artificial intelligence

Source – economist.com

DISCOVERING and harnessing fire unlocked more nutrition from food, feeding the bigger brains and bodies that are the hallmarks of modern humans. Google’s chief executive, Sundar Pichai, thinks his company’s development of artificial intelligence trumps that. “AI is one of the most important things that humanity is working on,” he told an event in California earlier this year. “It’s more profound than, I don’t know, electricity or fire.”

Hyperbolic analogies aside, Google’s AI techniques are becoming more powerful and more important to its business. But its use of AI is also generating controversy, both among its employees and the wider AI community.

One recent clash has centred on Google’s work with America’s Department of Defence (DoD). Under a contract signed in 2017 with the DoD, Google offers AI services, namely computer vision, to analyse military images. This might well improve the accuracy of strikes by military drones. Over the past month or so thousands of Google employees, including Jeff Dean, the firm’s AI chief, have signed a petition protesting against the work; at least 12 have resigned. On June 1st the boss of its cloud business, Diane Greene, conceded to those employees that the firm would not renew the contract when it expires next year.

The tech giant also published a set of seven principles which it promises will guide its use of AI. These included statements that the technology should be “socially beneficial” and “built and tested for safety”. More interesting still was what Google said it would not do. It would “proceed only where we believe that the benefits substantially outweigh the risks,” it stated. It forswore supplying AI services in future to power smart weapons or surveillance techniques that violate internationally accepted norms. It would, though, keep working with the armed forces in other capacities.

Google’s retreat comes partly because its AI talent hails overwhelmingly from the computer-science departments of American universities, notes Jeremy Howard, founder of Fast.ai, an AI research institute. Many bring liberal, anti-war views from academia with them, which can put them directly at odds with the firm in some areas. Since AI talent is scarce, the firm has to pay heed to the principles of its boffins, at least to some extent.

Military work is not the only sticking-point for Google’s use of AI. On June 7th a batch of patent applications filed by DeepMind, a London-based sister company, was made public. The reaction was swift. Many warned that the patents would have a chilling effect on other innovators in the field. The patents have not yet been granted—indeed, they may not be—but the requests fly in the face of the AI community’s accepted norms of openness and tech-sharing, says Miles Brundage, who studies AI policy at the University of Oxford. The standard defence offered on behalf of Google is that the firm has no history of patent abuse, and that it files such applications chiefly defensively, to protect itself from future patent trolls.

Whatever Google’s intent, there are signs that the homogeneity of the AI community may lessen in future. New paths into the AI elite are opening up beyond a PhD in computer science. Hopefuls can take vocational courses from online-education providers such as Udacity; the tech giants also offer residencies to teach AI techniques to workers from different backgrounds. That might just lead to a less liberal, less vocal AI community. If so, such courses might serve corporate interests in more ways than one.
