Artificial intelligence won’t rule the world so long as humans rule AI
What do the Pentagon, the Vatican and some of the big players in Silicon Valley have in common? They have all embraced ethical principles to guide the development of artificial intelligence that sound ambitious but sadly lack any teeth.
A framework to guide AI is critical because algorithms are rapidly being developed to analyse and repurpose the huge troves of personal information that governments and private businesses collect from us, informing everything from autonomous weapons to our credit ratings.
What seems to be emerging, though, is a feel-good tick-a-box process that looks good in a prospectus but does little to protect the public interest in the real world. It’s really just ethics-washing.
On February 25 the US Department of Defense – informed by its Defense Innovation Board, made up of executives from Google, Microsoft, Facebook and Amazon – proclaimed it would be guided by five ethical principles: its AI would be “responsible, equitable, traceable, reliable and governable”.
Four days later, the Vatican issued a paper calling for “new forms of regulation” of AI based on the principles of “transparency, inclusion, responsibility, impartiality, reliability, security and privacy”.
The striking thing about both these pronouncements is the degree to which they align with the official line from Silicon Valley, which couches ethics as a set of voluntary principles that will guide, rather than direct, the development of AI.
By proposing broad principles, which are notoriously difficult to define legally, they avoid the guard rails or red lines that would allow genuine oversight of the way this technology develops.
The other problem with these voluntary codes is that they will always be in conflict with the key drivers of technological change: making money (if you are a business) or saving money (if you are a government).
But there’s an alternative approach to harnessing technological change that warrants serious consideration. It is proposed by the Australian Human Rights Commission. Rather than woolly guiding principles, Commissioner Ed Santow argues that AI should be developed within three clear parameters.
First, it should comply with human rights law. Second, it should be used in ways that minimise harm. Finally, humans need to be accountable for the way AI is used. The difference with this approach is that it anchors AI development within the existing legal framework.
Under this proposal, artificial intelligence could operate legally in Australia only if its developers ensured it did not discriminate on the grounds of gender, race or social background, either directly or in effect.
AI proponents would also need to show they had thought through the impact of their technology, much as a property developer must prepare an environmental impact statement before building.
And critically, an AI tool should have a human – a flesh-and-blood person – who is responsible for its design and operation.
How would these principles work in practice? It’s worth looking at the failed robodebt program, under which recipients of government benefits were sent letters demanding they repay money they had allegedly been overpaid.
If it had been scrutinised before it went live, robodebt would likely have been found discriminatory: it shifted the onus of proof onto people from society’s most marginalised groups, requiring them to show their payments were valid.
If it had been subject to a public impact review, the glaring anomalies and inconsistencies in matching Australian Tax Office and social security information would have become apparent before it was trialled on vulnerable people. And if a human had been accountable for its operation, those who received a notice would have had an avenue of review, rather than feeling as though they were speaking to a machine.
The whole costly and destructive debacle might have been prevented.
Embracing a future where these “disruptive” technologies remake our society guided by voluntary ethical principles is not good enough. As Robert Elliott Smith observes in his excellent book Rage Inside the Machine, the idea that AI is amoral is bunkum. The values and priorities of the humans who commission and design it will determine the end product.
This challenge will become more pressing as algorithms that purport to “recognise” individuals begin to process banks of photos and video, tracking people’s movements and predicting their motivations. The Human Rights Commission report calls for a moratorium on the use of this technology in high-stakes areas such as policing. It seeks to protect citizens from “bad” applications, but also to provide an incentive for industry to support the development of an enforceable legal framework.
Champions of technology may well argue that government intervention will slow down development and risk Australia being “left behind”. But if we succeed in ensuring AI is “fair by design”, we might end up with a distinctly Australian technology, which reflects our values, to share with the world.