Can AI Built to ‘Benefit Humanity’ Also Serve the Military?

Source: wired.com

Microsoft’s recent victory in landing a $10 billion Pentagon cloud-computing contract called JEDI could make life more complicated for one of the software giant’s partners: the independent artificial-intelligence research lab OpenAI.

OpenAI was created in 2015 by Silicon Valley luminaries including Elon Musk to look to the far horizon, and save the world. The newborn nonprofit said it had commitments totaling $1 billion and would work on AI “to benefit humanity as a whole, unconstrained by a need to generate financial return.” But OpenAI restructured into a for-profit this year, saying it needed more money to fulfill its goals, and took $1 billion from Microsoft in a deal that involves helping the company’s cloud division develop new AI technology.

Now Microsoft’s JEDI win raises the possibility that OpenAI’s work for the benefit of humanity may also serve the US military.

Asked if anything would prevent OpenAI technology reaching the Pentagon, the lab’s CEO, Sam Altman, said its contract with Microsoft requires “mutual agreement” before any particular technology from the lab can be commercialized by the software giant. He declined to discuss what OpenAI might agree to or what the company’s stance is on helping the US military. Microsoft declined to comment on its deal with OpenAI.

There’s reason to think fruits of the collaboration may interest the military. The Pentagon’s cloud strategy lists four tenets for the JEDI contract, among them the improvement of its AI capabilities. This comes amidst its broader push to tap tech-industry AI development, seen as far ahead of the government’s.

Secretary of Defense Mark Esper said at a conference last week that building up military AI was crucial to staying ahead of the Pentagon’s primary competitors, China and Russia. “There are a few key technologies out there,” he said. “I put AI number one.”

OpenAI is known for flashy projects that push the limits of the technology by marshaling huge amounts of computing power behind machine-learning algorithms. The lab has developed bots that play the complex videogame Dota 2, software that generates surprisingly fluid text, and a robot control system capable of manipulating a Rubik’s Cube. None of those seem likely to be immediately useful to Microsoft or its customers, but the infrastructure that OpenAI has built to power its flashy demos could be adapted to more pragmatic applications.

Closer ties with the Pentagon—and the JEDI contract in particular—make some people in the tech industry uncomfortable.

Last year, employee protests, including from AI researchers, forced Google to say it would not renew a contract applying AI to drone surveillance imagery. Google also released a set of ethical principles for its AI projects, which allow for military work but forbid weapons development. The company later withdrew its JEDI bid, saying the contract conflicted with those new AI ethics rules.

Microsoft faced its own internal complaints after it won a $480 million Army contract in 2018 aiming to give US soldiers “increased lethality” by equipping them with the company’s HoloLens augmented-reality headset. Opposition also arose over its JEDI bid. Some employees released a protest letter urging the company to make like Google and withdraw its pitch for the contract on ethical grounds.

Microsoft CEO Satya Nadella has dismissed concerns about working with the military. “As an American company, we’re not going to withhold technology from the institutions that we have elected in our democracy to protect the freedoms we enjoy,” he said in an interview published by Quartz last week.

OpenAI has not made any similar public statement. The lab has urged caution about the power of AI, and it contributed to a report last year that raised the alarm about malicious uses of AI, including by governments. OpenAI policy and research staff were also credited in two recent reports concerned with military AI, from a Pentagon advisory group that drafted AI ethics guidelines, and from a Congressional commission that declared it a national security priority for government agencies to tap private-sector AI.

OpenAI’s leaders say that it is legally bound to a charter created by its original nonprofit incarnation, which still exists as a kind of overseer. The short document, less than 400 words, appears to leave the company’s commercial dealings unconstrained, so long as OpenAI judges that its influence over the deployment of AI is “for the benefit of all” and avoids any uses “that harm humanity or unduly concentrate power.”

Jack Poulson, founder of Tech Inquiry, which connects tech workers with civil society groups to promote industry accountability, says that document needs updating. “If OpenAI is unwilling to clarify how its charter to avoid ‘harm’ to ‘humanity’ relates to a billion-dollar business deal, then its public claims to benevolence deserve to be treated as marketing rather than an accountability mechanism,” he says. Poulson resigned from a senior research post at Google last year over the company’s (now defunct) work on a search engine designed to comply with Chinese political censorship.

Microsoft declined to say what it expects to get out of its OpenAI partnership, or whether any products from it would go through the company’s internal ethics review process for AI projects. Microsoft did say earlier this summer that its work with OpenAI would “adhere to the two companies’ shared principles on ethics and trust”—whatever those may be.
