AI predictions 2020: Artificial Intelligence grows up

Source: itproportal.com

Over the last few years, artificial intelligence (AI) has been the enfant terrible of the business world: a technology full of unconventional and sometimes controversial behaviour that has shocked, provoked and enchanted audiences worldwide.

But now it’s time for AI to grow up. Businesses and consumers are tired of rehashing the same debates about AI hype versus reality. In 2020, I see three opportunities for this maturing to happen, spanning advocacy, regulation and responsibility.

Prediction #1: AI advocacy groups will fight back

As AI becomes more pervasive, we’re likely to see those wronged by it inspired to take action. Think of someone denied rightful entry to a country due to inaccurate facial recognition, misdiagnosed by disease-detecting robotic technology, denied a loan because a new type of credit score rates them poorly, or incorrectly blamed for a car accident by their insurance company’s mobile app. What recourse does a person have when AI has an unfair and severe impact on their life?

It should go without saying that the world is already an unfair place. There are many ways in which people are treated unfairly, with advocacy groups to match – the same will be true of AI.

To combat cases of discrimination, consumers and their advocacy groups will demand access to the information and processes by which AI systems arrive at their decisions. AI advocacy will empower consumers and drive significant debate among experts on how best to approach data processing and AI model development.

Businesses wishing to get ahead of the coming storm should look to the model provided by credit scoring and focus first on data processing. If someone is denied credit, the would-be credit issuer is required to provide an explanation as to why; if a consumer believes an item on a credit bureau report is inaccurate, they can file a dispute to request an investigation and, if warranted, its removal.
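To make the credit-scoring analogy concrete, below is a minimal sketch of how an automated decision can carry credit-style ‘reason codes’ – the inputs that most reduced an applicant’s score become the explanation given on denial. The feature names, weights and baselines are entirely illustrative assumptions, not any real bureau’s model.

```python
# Minimal "reason codes" sketch: a toy linear credit score whose denial
# explanations are the features that pulled the score down the most.
# All names, weights, and baselines are illustrative assumptions.

FEATURE_BASELINES = {          # hypothetical population averages
    "payment_history": 0.9,    # fraction of on-time payments
    "utilisation": 0.3,        # fraction of available credit in use
    "account_age_years": 7.0,
    "recent_inquiries": 1.0,
}
WEIGHTS = {                    # hypothetical linear-model weights
    "payment_history": 400.0,
    "utilisation": -250.0,
    "account_age_years": 10.0,
    "recent_inquiries": -15.0,
}
OFFSET = 600.0                 # score of a perfectly average applicant

def score(applicant: dict) -> float:
    """Offset plus each feature's weighted deviation from the baseline."""
    return OFFSET + sum(
        WEIGHTS[f] * (applicant[f] - FEATURE_BASELINES[f]) for f in WEIGHTS
    )

def reason_codes(applicant: dict, top_n: int = 2) -> list[str]:
    """The features that most reduced this applicant's score."""
    contrib = {
        f: WEIGHTS[f] * (applicant[f] - FEATURE_BASELINES[f]) for f in WEIGHTS
    }
    worst = sorted(contrib, key=contrib.get)[:top_n]
    return [f"{f} lowered the score by {-contrib[f]:.0f} points"
            for f in worst if contrib[f] < 0]

applicant = {"payment_history": 0.7, "utilisation": 0.8,
             "account_age_years": 2.0, "recent_inquiries": 4.0}
print(score(applicant))         # 300.0
print(reason_codes(applicant))  # utilisation and payment_history cited
```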

Prediction #2: More AI regulation is on the way

Over the next year, I predict we will see the rise of international standards to define a framework for safe and trusted AI. I hope to see AI experts support and drive regulation of the industry, to ensure fairness and inculcate responsibility.

Leaders in many industries hold a blanket view that government regulation is an inhibitor of innovation. With AI, this bias couldn’t be further from the truth – without proper regulation, consumer confidence and appetite for AI will be compromised by faulty and potentially dangerous deployments. Already, regulators are legislating both to protect consumers and to promote more widespread AI usage. In a whitepaper published in February 2020, the European Commission states: ‘as digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. Trustworthiness is also a prerequisite for its uptake.’

Article 22 of the General Data Protection Regulation (GDPR) was a first step in this direction. It states:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her… the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

Article 22 essentially stipulates that individuals have a right to request human intervention when decisions are taken by an AI. We’ve also seen similar measures outside the EU – with the California Consumer Privacy Act, for example. However, it’s worth noting that government agencies are often making demands of technology about which they have little understanding. There is a stark dearth of AI experts in the halls of most governments, which is why cooperation between industry and regulators is so crucial; the White House, for example, has held AI summits with industry leaders to advance the government’s understanding.

Prediction #3: The rise of explainable, ethical, and defensive AI

As the adoption of AI-based technologies continues to evolve, it’s clear that the industry as a whole will come under increased scrutiny. To satisfy demand from regulators, consumers and advocacy groups, AI models will need to rest on three key pillars: they must be explainable, ethical and defensive.

  • Explainable AI involves creating models that are auditable and comprehensible to human auditors – opening up the ‘black box’. Using blockchain to create a clear, immutable record of decisions is one method of doing so (a minimal sketch of such a record follows this list). Explainability must be built into models from the start, or ethical AI will be impossible to implement. In 2020, it will no longer be acceptable to lack a defined, auditable AI model development process that fits directly into the model governance processes designed to protect the business and its customers from poorly built AI.
  • Ethical AI involves taking precautions to expose what models have learned and whether they could impute bias. Isolating the input data fields used by models may seem sufficient – ‘if I don’t use age, gender, or race in my model, it’s not biased.’ Upon deeper inspection, however, models can still produce biased outcomes: if a model includes the brand and version of an individual’s phone, for example, that data relates to the ability to afford an expensive phone, a characteristic that can impute income (the second sketch after this list shows a simple check for such proxies). In 2020, organisations will actively discuss which machine learning architectures are approved for development, much as companies maintain software development standards and approved lists of dependent software components. Data scientists will not be able to choose their favourite technique willy-nilly. Given the importance of being responsible with AI, machine learning architectures will start to be designed to be explainable first and predictive second.
  • Defensive AI is a response to the inevitable rise of offensive AI: models used by criminals to, for example, attack a bank’s systems. Any time an AI model is deployed, criminals have the chance to learn how it responds to inputs and use that information to train models to exploit it. Defensive AI models selectively deceive or return incorrect outputs if they detect they are being monitored by an offensive AI: they might return inverted scores or create patterns that make the adversary’s modelling data set inaccurate, and consequently the attacker’s AI less effective (the third sketch after this list illustrates the idea). In 2020, with the rapid digitalisation of our businesses, AI will play a role in aggressively monitoring abnormal usage and intrusions from a cyber perspective, and unsupervised AI will monitor massive attacks on infrastructure. AI models will need “virtual bodyguards” to ensure they operate appropriately.
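On the explainability pillar, one way to realise the ‘clear, immutable record of decisions’ mentioned above is a hash chain – the core data structure behind blockchain ledgers. The following is a minimal, single-process sketch; a production system would anchor or distribute the chain rather than keep it in memory, and the model name is hypothetical.

```python
# Tamper-evident decision log: each record carries the hash of the
# previous one, so editing any past decision breaks verification.
# Single-process sketch of the idea, not a distributed ledger.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, model_id: str, inputs: dict, output) -> None:
        """Append one model decision, chained to the previous entry."""
        entry = {"ts": time.time(), "model": model_id,
                 "inputs": inputs, "output": output,
                 "prev_hash": self.last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.last_hash = digest

    def verify(self) -> bool:
        """Recompute every hash; any tampered record breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("credit-model-v3", {"utilisation": 0.8}, "decline")  # hypothetical model
print(log.verify())  # True; becomes False if any entry is later edited
```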
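On the ethical pillar, the phone example above can be caught with a simple proxy check: before approving a feature, measure how strongly it correlates with a sensitive attribute in a vetting dataset. The data, feature names and threshold below are hypothetical, and real governance would use richer tests than a single correlation.

```python
# Proxy-bias check sketch: flag a candidate feature that can impute a
# sensitive attribute (here, income) even though the attribute itself
# is never fed to the model. Data and threshold are hypothetical.
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical vetting sample: phone price tier vs. household income.
phone_price_tier = [1, 1, 2, 2, 3, 3, 3, 4, 4, 4]
income_thousands = [22, 28, 35, 41, 55, 60, 64, 90, 95, 110]

r = correlation(phone_price_tier, income_thousands)
print(f"correlation with income: r = {r:.2f}")
if abs(r) > 0.5:  # illustrative cut-off, set by model governance
    print("likely income proxy: exclude the feature or document why not")
```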
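And on the defensive pillar, here is one way the ‘deceive the prober’ idea could look in code. The probing heuristic (many distinct queries from one caller) and the inverted-score response are illustrative assumptions, not a description of any deployed system.

```python
# Defensive scoring sketch: if a caller's query pattern looks like a
# systematic sweep of the input space, answer with an inverted score
# to poison the data set an attacker is collecting. The detection
# heuristic and threshold are deliberately simplistic illustrations.
from collections import defaultdict

class DefensiveScorer:
    def __init__(self, model, sweep_threshold: int = 20):
        self.model = model                # the real scoring function
        self.sweep_threshold = sweep_threshold
        self.seen = defaultdict(set)      # caller -> distinct inputs tried

    def score(self, caller_id: str, features: dict) -> float:
        self.seen[caller_id].add(tuple(sorted(features.items())))
        true_score = self.model(features)
        if len(self.seen[caller_id]) > self.sweep_threshold:
            return 1.0 - true_score       # deceive the probable prober
        return true_score

# Toy model: risk rises with transaction amount.
scorer = DefensiveScorer(model=lambda f: min(1.0, f["amount"] / 1000))
for amount in range(0, 3000, 100):        # an attacker sweeping inputs
    scorer.score("attacker-1", {"amount": amount})
print(scorer.score("attacker-1", {"amount": 900}))  # ~0.1, not the true 0.9
```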

As AI grows into a pervasive technology, there is currently little trust in the morals and ethics of many of the companies that use it. As a data scientist, I find that disheartening – but I believe there are ways AI innovators can respond to the pressure they will come under from governments and their customers.
