DON’T MAKE AI ARTIFICIALLY STUPID IN THE NAME OF TRANSPARENCY

Source – wired.com

ARTIFICIAL INTELLIGENCE SYSTEMS are going to crash some of our cars, and sometimes they’re going to recommend longer sentences for black Americans than for whites. We know this because they’ve already gone wrong in these ways. But this doesn’t mean that we should insist—as many, including the European Union’s General Data Protection Regulation, do—that artificial intelligence be able to explain how it came up with its conclusions in every non-trivial case.

Demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid. And given the promise of the type of AI called machine learning, a dumbing-down of this technology could mean failing to diagnose diseases, overlooking significant causes of climate change, or making our educational system excessively one-size-fits-all. Fully tapping the power of machine learning may well mean relying on results that are literally impossible to explain to the human mind.

Machine learning, especially the sort called deep learning, can analyze data into thousands of variables, arrange them into immensely complex and sensitive arrays of weighted relationships, and then run those arrays repeatedly through computer-based neural networks. To understand the outcome—why, say, the system thinks there’s a 73 percent chance you’ll develop diabetes or an 84 percent chance that a chess move will eventually lead to victory—could require comprehending the relationships among those thousands of variables computed by multiple runs through vast neural networks. Our brains simply can’t hold that much information.
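
To make that concrete, here is a minimal, purely illustrative sketch in Python; the network shape and numbers are invented for this example, not drawn from any real system. Even a toy network turns a thousand input variables into a single probability through hundreds of thousands of weights, and no individual weight "explains" the answer on its own.

```python
# Minimal sketch (illustrative only): even a tiny feed-forward network's
# prediction is a composition of hundreds of thousands of weighted terms,
# so no single weight "explains" the final probability on its own.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical record with 1,000 input variables (features).
x = rng.normal(size=1000)

# Two hidden layers of 256 units each: roughly 320,000 weights in total.
W1, b1 = rng.normal(size=(256, 1000)) * 0.03, np.zeros(256)
W2, b2 = rng.normal(size=(256, 256)) * 0.06, np.zeros(256)
w3, b3 = rng.normal(size=256) * 0.06, 0.0

relu = lambda z: np.maximum(z, 0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

h1 = relu(W1 @ x + b1)
h2 = relu(W2 @ h1 + b2)
risk = sigmoid(w3 @ h2 + b3)   # e.g., "a 73 percent chance of diabetes"

n_weights = W1.size + W2.size + w3.size
print(f"{n_weights:,} weights feed into a single probability: {risk:.2f}")
```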

There’s lots of exciting work being done to make machine learning results understandable to humans. For example, sometimes an inspection can disclose which variables had the most weight. Sometimes visualizations of the steps in the process can show how the system came up with its conclusions. But not always. So we can either stop insisting on explanations in every case, or we can resign ourselves to not always getting the most accurate results possible from these machines. That might not matter if machine learning is generating a list of movie recommendations, but it could literally be a matter of life and death in medical and automotive cases, among others.
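
As one illustration of that kind of inspection, the sketch below (assuming the scikit-learn library and a synthetic dataset) uses permutation importance, a standard technique that estimates a variable's weight by measuring how much the model's accuracy drops when that variable is scrambled. It is one tool among many and, as noted above, it does not always add up to a satisfying explanation.

```python
# A minimal sketch (assuming scikit-learn and synthetic data) of one common
# way to "disclose which variables had the most weight": permutation
# importance measures how much accuracy drops when each feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Print the five most influential variables.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```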

Explanations are tools: We use them to accomplish some goal. With machine learning, explanations can help developers debug a system that’s gone wrong. But explanations can also be used to judge whether an outcome was based on factors that should not count (gender, race, etc., depending on the context) and to assess liability. There are, however, other ways we can achieve the desired result without inhibiting the ability of machine learning systems to help us.

Here’s one promising tool that’s already quite familiar: optimization. For example, during the oil crisis of the 1970s, the federal government decided to optimize highways for better gas mileage by dropping the national speed limit to 55 miles per hour. Similarly, the government could decide to regulate what autonomous cars are optimized for.

Say elected officials determine that autonomous vehicles’ systems should be optimized for lowering the number of US traffic fatalities, which in 2016 totaled 37,000. If the number of fatalities drops dramatically—McKinsey says self-driving cars could reduce traffic deaths by 90 percent—then the system will have reached its optimization goal, and the nation will rejoice even if no one can understand why any particular vehicle made the “decisions” it made. Indeed, the behavior of self-driving cars is likely to become quite inexplicable as they become networked and determine their behavior collaboratively.

Now, regulating autonomous vehicle optimizations will be more complex than that. There’s likely to be a hierarchy of priorities: Self-driving cars might be optimized first for reducing fatalities, then for reducing injuries, then for reducing their environmental impact, then for reducing drive time, and so forth. The exact hierarchy of priorities is something regulators will have to grapple with.
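
One way to picture such a hierarchy is lexicographic comparison: a lower-priority goal only breaks ties on the higher-priority ones. The sketch below uses invented, hypothetical numbers and policy names purely to illustrate the idea, not to suggest how regulators would actually encode it.

```python
# A minimal sketch (hypothetical numbers) of a lexicographic hierarchy of
# priorities: candidate policies are compared first on fatalities, then on
# injuries, then environmental impact, then drive time.
candidate_policies = {
    # name: (fatalities, injuries, emissions_index, avg_drive_minutes)
    "policy_a": (4800, 210_000, 0.92, 26.0),
    "policy_b": (4800, 195_000, 0.97, 27.5),
    "policy_c": (5100, 180_000, 0.88, 24.0),
}

# Python compares tuples element by element, which is exactly a
# lexicographic ordering over the priority hierarchy.
best = min(candidate_policies, key=candidate_policies.get)

# policy_b wins: it ties policy_a on fatalities but has fewer injuries,
# even though policy_c is better on emissions and drive time.
print("Chosen under the hierarchy:", best)
```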

Whatever the outcome, it’s crucial that existing democratic processes, not commercial interests, determine the optimizations. Letting the market decide is also likely to lead to, well, sub-optimal decisions, for carmakers will have a strong incentive to program their cars to always come out on top, damn the overall consequences. It would be hard to argue that the best possible outcome on highways would be a Mad Max-style Carmageddon. These are issues that affect the public interest and ought to be decided in the public sphere of governance.

But stipulating optimizations and measuring the results is not enough. Suppose traffic fatalities drop from 37,000 to 5,000, but people of color make up a wildly disproportionate number of the victims. Or suppose an AI system that culls job applicants picks people worth interviewing, but only a tiny percentage of them are women. Optimization is clearly not enough. We also need to constrain these systems to support our fundamental values.

For this, AI systems need to be transparent about the optimizations they’re aimed at and about their results, especially with regard to the critical values we want them to support. But we do not necessarily need their algorithms to be transparent. If a system is failing to meet its marks, it needs to be adjusted until it does. If it’s hitting its marks, explanations aren’t necessary.
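
As a rough illustration of auditing results rather than algorithms, the sketch below uses invented data and an invented 80 percent disparity threshold (the real bound would be a policy choice made in the public sphere). It checks interview rates by group and flags the system for adjustment when it misses its mark, without ever looking inside the model.

```python
# A minimal sketch (hypothetical data and threshold) of auditing a system's
# results rather than its algorithm: compare interview rates by group and
# flag the system for adjustment if the disparity exceeds an agreed bound.
from collections import Counter

# Each record: (group, was_selected_for_interview)
outcomes = [("women", True)] * 30 + [("women", False)] * 470 \
         + [("men", True)] * 90 + [("men", False)] * 410

selected = Counter(group for group, picked in outcomes if picked)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

# Example constraint: no group's selection rate may fall below 80 percent
# of the highest group's rate. The threshold itself is a policy decision.
highest = max(rates.values())
for group, rate in rates.items():
    status = "OK" if rate >= 0.8 * highest else "FAILS constraint -> adjust"
    print(f"{group}: rate {rate:.2%} ({status})")
```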

But what optimizations should we the people impose? What critical constraints? These are difficult questions. If a Silicon Valley company is using AI to cull applications for developer positions, do we the people want to insist that the culled pool be 50 percent women? Do we want to say that it has to be at least equal to the percentage of women graduating with computer science degrees? Would we be satisfied with phasing in gender equality over time? Do we want the pool to be 75 percent women to help make up for past injustices? These are hard questions, but a democracy shouldn’t leave it to commercial entities to come up with answers. Let the public sphere specify the optimizations and their constraints.

But there’s one more piece of this. It will be cold comfort to the families of the 5,000 people who die in autonomous-vehicle accidents that 32,000 other lives were saved. Given the complexity of transient networks of autonomous vehicles, there may well be no way to explain why it was your Aunt Ida who died in that pile-up. But we also would not want to sacrifice another 1,000 or 10,000 people per year in order to make the traffic system explicable to humans. So, if explicability would indeed make the system less effective at lowering fatalities, then no-fault social insurance (government-funded insurance that is issued without having to assign blame) should be routinely used to compensate victims and their families. Nothing will bring victims back, but at least there would be fewer Aunt Idas dying in car crashes.

There are good reasons to move to this sort of governance: It lets us benefit from AI systems that have advanced beyond the ability of humans to understand them.

It focuses the discussion at the system level rather than on individual incidents. By evaluating AI in comparison to the processes it replaces, we can perhaps swerve around some of the moral panic AI is occasioning.

It treats the governance questions as societal questions to be settled through existing processes for resolving policy issues.

And it places the governance of these systems within our human, social framework, subordinating them to human needs, desires, and rights.
