Blindly using data to make decisions doesn’t create ethical AI systems

Source: themanufacturer.com

It’s no secret that decisions made by artificial intelligence (AI) systems will increasingly affect our professional and personal lives, bringing a multitude of benefits along the way. However, big decisions often come with an ethical price tag.

For example, take AI in the Human Resources (HR) field. Many manufacturing businesses are starting to use AI and machine learning tools to screen the hundreds if not thousands of CVs they receive when hiring new employees.

The goal is to save time and human effort while still finding qualified, desirable candidates to fill the role.

However, even the best-trained AI system will have its flaws. Not because it wants to, but because we have trained it to be that way through the historical data we feed it.

For example, suppose a company advertises a vacancy for a shop floor assistant at one of its plants. Historical data suggests that the vast majority of people who have held this role are male.

While developing its learning capabilities, the AI is likely to engage with or respond only to male applicants, so female applicants have a higher chance of missing out on the position. While not a manufacturer, Amazon demonstrated this scenario when its AI-based recruiting tool was found to be discriminating against women and was subsequently scrapped.

As a general-purpose technology, AI can be used in many ways, with businesses deciding how and where. However, with so few examples of how it can go wrong (at least in the public domain), businesses are blindly feeding AI systems data with little to no regard for the ethical implications.

Why ethical AI is so important

Ethics are critical to automated decision-making processes where AI is used. Without some consideration for how decisions are naturally made by humans, there is no way we can expect our AI systems to behave ethically.

Take the Volkswagen emissions scandal. Back in 2015, it emerged that millions of diesel VWs had been sold across the globe with software that could detect test scenarios and alter engine performance to show reduced emissions. Once back on the road, the cars would switch back to ‘normal’, emitting up to 40 times the legal limit of nitrogen oxides.

In this case the test engineers were following orders, so the question of who was responsible might have been unclear. However, the judicial response was that the engineers could have raised the issue or left the organisation, so liability lay with them.

The same could apply to data scientists in another scenario. If they realise that elements of a decision-making process could cause bias and harm, they have both the option and the obligation to flag the issue or to walk away.

A final example is the Boeing 737 Max disaster, in which decisions made by software overrode those made by qualified pilots, leading to two fatal crashes and the worldwide grounding of the entire 737 Max fleet.

These fledgling software systems, if not designed and trained properly, have the potential to severely damage a company’s reputation, especially while liability is still under discussion.

How biases are introduced and who’s responsible

Although humans are the main source of these biases, bias can also live in the data itself, and if we aren’t careful, AI will amplify it.

A lack of representation in industry is also increasingly being cited as the root cause of bias in data. And while the question of liability is still being widely debated, I believe it’s important for business leaders to take more responsibility for unintentionally introducing bias into an AI system.

As humans, we will always be prone to making mistakes, but unlike machines we have ‘human qualities’ such as consciousness and judgement that allow us to correct those mistakes over time.

However, unless these machines are explicitly taught that what they are doing is ‘wrong’ or ‘unfair’, the error will continue.

In my view, and I’m sure many others’, blindly allowing these AI systems to continue making mistakes is irresponsible. And when things do go wrong, which they inevitably will, we need to ask ourselves who is liable: is it the machine, the data scientist or the owner of the data?

The question is still being debated within industry, but as these errors become more public and are investigated, we will start to learn and understand where responsibility lies.

How can we remove these biases?

To ensure decision-making is fair and equal for all, manufacturers need to get better at thoroughly investigating the decision-making process to ensure there’s no bias on the part of the human, who will often act unintentionally and unconsciously.

This reduces or eliminates the chance of those biases being absorbed by the AI and of the resulting errors proliferating.
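One simple way to start that investigation is to compare historical outcomes across groups before any model is trained. The sketch below is purely illustrative: the column names (‘gender’, ‘shortlisted’), the toy data and the choice of metric are assumptions for the example, not anything prescribed above. It simply measures whether shortlisting rates differ markedly between groups, which is one early signal that bias may already be baked into the historical data.

```python
# Illustrative sketch only: hypothetical column names and made-up data.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Shortlisting rate per group (mean of a 0/1 outcome column)."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate.
    A value well below 1.0 suggests the process favours one group."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical historical screening outcomes, for illustration only.
    applications = pd.DataFrame({
        "gender":      ["male", "male", "female", "female", "male", "female"],
        "shortlisted": [1, 1, 0, 1, 1, 0],
    })
    rates = selection_rates(applications, "gender", "shortlisted")
    print(rates)  # shortlisting rate per group
    print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```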

I’d like to see a benchmark set for businesses, either through a series of questions or a thorough checklist, to help ensure any bias on the part of the human is eradicated at the outset. The checklist would ensure all decision-making is fair and equal, accountable, safe, reliable and secure, and that it addresses privacy.

The checklist could be used by in-house data science teams, especially as an induction tool for new recruits, or by the external companies that businesses engage to build and manage their AI systems.

If manufacturers do decide to outsource aspects of their machine learning capabilities, this checklist is especially pertinent as it acts as a form of contract, whereby any potential disputes over liability can more easily be resolved.

As we’re still in the early stages of AI, it’s unclear whether these measures would be legally binding, but they may go some way to proving – to an insurance company or lawyer – where liability lies.

If a manufacturer can demonstrate whether or not the checklist was followed, whether the work was kept in-house or outsourced, it is better protected than it would have been otherwise.

Another part of benchmarking could be to ensure all data scientists within a business, whether new to the role or experienced technicians, take part in a course on ethics in AI.

This could also help people understand, or remember, the need to remove certain parameters from the decision-making process, for example any gender attributes. This way, when building a new AI system that takes male and female activity into account, they’ll know to deactivate the gender feature to ensure the system is gender neutral.
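As a rough illustration of what ‘deactivating the gender feature’ might look like in practice, the sketch below drops a hypothetical sensitive column before fitting a simple screening model. The column names, toy data and model choice are all assumptions made for the example rather than a description of any real pipeline.

```python
# Illustrative sketch only: hypothetical columns, toy data and model choice.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical sensitive attributes to "deactivate" before training.
SENSITIVE_FEATURES = ["gender"]

def train_screening_model(applications: pd.DataFrame, label_col: str = "hired"):
    """Train a simple screening model with sensitive columns removed."""
    features = applications.drop(columns=SENSITIVE_FEATURES + [label_col])
    labels = applications[label_col]
    model = LogisticRegression()
    model.fit(features, labels)
    return model

if __name__ == "__main__":
    # Made-up toy data, for illustration only.
    data = pd.DataFrame({
        "gender":           ["male", "female", "male", "female"],
        "years_experience": [5, 6, 2, 8],
        "skills_score":     [0.7, 0.9, 0.4, 0.8],
        "hired":            [1, 1, 0, 1],
    })
    model = train_screening_model(data)
    print(model.feature_names_in_)  # only the non-sensitive features remain
```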

This article isn’t designed to scare people; rather, it’s to urge business leaders to stop overlooking potential biases in automated decision-making processes and to ensure decisions are fair and equal for everyone.

There’s always a chance that human bias will creep in, but it’s down to us to take the necessary steps to ensure processes are fair and transparent. And the quicker and more efficiently we set up a benchmark from which to work, the more likely we are to build fairer and more ethical AI systems.
