Four Rules To Guide Expectations Of Artificial Intelligence

Source: forbes.com

Is the world too chaotic for any technology to control? Is technology revealing that things are even more chaotic and uncontrollable than first thought? Artificial intelligence, machine learning and related technologies may be underscoring a realization Albert Einstein had many decades ago: “The more I learn, the more I realize how much I don’t know.”

When it comes to employing the latest analytics in enterprises and beyond, even the best technology, from predictive algorithms to artificial intelligence, can reveal but not fully explain the complexity and interactions that shape events and trends. That’s the word from Harvard University’s David Weinberger, who explains, in his latest book, how AI, big data, science and the internet are all revealing a fundamental truth: things are more complex and unpredictable than we’ve allowed ourselves to see.

“Our unstated contract with the universe has been if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus at least somewhat pliable to our will,” Weinberger writes in Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility. “But now that our tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it.”

The irony is that the systems we are creating to make some sense of the world, such as machine learning and deep learning, are only creating more mystery. To illustrate this point, Weinberger points to Deep Patient, an AI program developed by Mount Sinai Hospital in New York in 2015. Deep Patient was fed the medical records of 700,000 patients as jumbled data, with no framework to organize them and no instructions on how they should be used. Yet, even working from these incomplete pieces of data, Deep Patient was able to predict the likelihood of patients developing some diseases more accurately than doctors.
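The published Deep Patient work built unsupervised representations of raw records before any prediction was made. The sketch below shows that general pattern in miniature with a denoising autoencoder in PyTorch; the synthetic data, dimensions and training loop are illustrative assumptions, not Mount Sinai’s actual code.

```python
# Minimal sketch, assuming a denoising-autoencoder-style approach:
# learn a compact representation of "jumbled" patient records without
# telling the model which features matter.
import torch
import torch.nn as nn

n_patients, n_raw_features, n_latent = 1000, 500, 64

# Stand-in for unstructured EHR data: a sparse binary matrix of codes/labs/terms.
records = (torch.rand(n_patients, n_raw_features) < 0.05).float()

encoder = nn.Sequential(nn.Linear(n_raw_features, n_latent), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_latent, n_raw_features), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(20):
    # Randomly mask ~20% of entries, then ask the model to reconstruct the original.
    noisy = records * (torch.rand_like(records) > 0.2).float()
    recon = decoder(encoder(noisy))
    loss = nn.functional.binary_cross_entropy(recon, records)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned patient vectors feed a downstream risk model; the individual
# latent dimensions carry no human-readable meaning, which is the opacity
# Weinberger describes.
with torch.no_grad():
    patient_vectors = encoder(records)   # shape: (n_patients, n_latent)
```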

There’s only one catch to Deep Patient’s success: no one knows why or how it comes to its conclusions. “The number and complexity of contextual variables mean that Deep Patient simply cannot always explain its diagnoses as a conceptual model that its human keepers can understand,” Weinberger says.

The bottom line, Weinberger says, is that success with AI and automation stems from accepting and leveraging the results these systems deliver, not from trying to decipher how they arrive at them. If A/B testing shows that text on a website results in more traffic than a photo placement, go with the text; a minimal sketch of such a test follows.
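For the text-vs-photo example, a simple two-proportion z-test is one common way to decide whether the observed difference is likely real; the visitor and click counts below are assumed figures, not data from the article.

```python
# Hypothetical A/B test: does the text headline beat the photo placement?
from math import sqrt
from statistics import NormalDist

visitors_text, clicks_text = 5000, 460     # variant A: text headline (assumed)
visitors_photo, clicks_photo = 5000, 390   # variant B: photo placement (assumed)

p_text = clicks_text / visitors_text
p_photo = clicks_photo / visitors_photo
p_pooled = (clicks_text + clicks_photo) / (visitors_text + visitors_photo)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_text + 1 / visitors_photo))
z = (p_text - p_photo) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided

print(f"text: {p_text:.1%}, photo: {p_photo:.1%}, z={z:.2f}, p={p_value:.3f}")
# If p_value is small, ship the text variant -- without needing to explain *why* it wins.
```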

He outlines four rules of the road that should guide our expectations when it comes to machine learning and deep learning applications:

  • Prioritize the workflow of AI systems, but leave it to AI to determine how results are produced. Today’s deep learning models “are not created by humans, at least not directly. Humans choose the data and feed it in, humans head the system toward a goal, and humans can intercede to tune the weights and outcomes. But humans do not necessarily tell the machine what features to look for. Because the models deep learning may come up with are not based on the models we have constructed for ourselves, they can be opaque to us.”
  • Throw away the old models that shaped expectations. “Deep learning models are not generated premised on simplified principles, and there’s no reason to think they are always going to produce them.”
  • We shouldn’t be expected to understand what underlies AI decisions. “Deep learning systems do not have to simplify the world to what humans can understand,” Weinberger says. “Our old, simplified models were nothing more than the rough guess of a couple of pounds of brains trying to understand a realm in which everything is connected to, and influenced by, everything.”
  • Ultimately, it’s about the data. “Everything connected to everything means that machine learning’s model can constantly change. Changes in machine learning models can occur simply by retraining them on new data. Indeed, some systems learn continuously.” (A sketch of this kind of continuous updating appears below.)
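One way to picture “learning continuously” is a model that is updated in place as each new batch of data arrives; the synthetic data and drift pattern below are assumptions for illustration only.

```python
# Minimal sketch of incremental learning: the model keeps shifting with new data
# rather than staying a fixed artifact.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

for day in range(30):                       # pretend each batch is one day of fresh data
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.1 * day * X[:, 1] > 0).astype(int)   # the underlying pattern drifts
    model.partial_fit(X, y, classes=[0, 1])  # update in place on the new batch

# The coefficients after day 30 differ from those after day 1:
# retraining on new data is itself a change to the model.
print(model.coef_.round(2))
```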

A question that comes out of this, of course, is whether to trust AI output when bias needs to be questioned or analyzed. While Weinberger’s book does not tackle head-on the issue of human bias being baked into AI algorithms, a subject that fills entire books of its own, he points out that human biases find their way into results, which illustrates our own failings. People seek explicability “to prevent AI from making our biased culture and systems even worse than they were before AI. Keeping AI from repeating, amplifying and enforcing existing prejudices is a huge and hugely important challenge.”

No matter how automated our decision-making becomes, critical thinking, human critical thinking, is still needed to run businesses and institutions. Humans need to be able to override or question the output of AI systems, especially when the process is opaque. This is a critical skill that needs to be part of every job, every training program and every course curriculum from now on.
