The ethics of Machine Learning: Just because we ‘can’ doesn’t mean we ‘should’

Source: itproportal.com

Richard Pilling considers the ethical responsibilities of delivering Machine Learning solutions and suggests that the immense possibilities opened up by ML should always be tempered with an understanding of potential consequences.

I was lucky to grow up in the 80s as the youngest of three children – a happy time of riding around on my Raleigh Grifter, watching Knight Rider, and playing on my Commodore 64. My mum had a totemic and often repeated phrase of “just because you can, doesn’t mean you should”, which has always stuck with me; it was part of my mum teaching me right from wrong. It’s no surprise she went on to have a career as a social worker, helping disadvantaged children find safety, peace of mind, and good families. She instilled in them the same strong ethics she taught me. At that time my dad was the head of Quality Assurance for a multinational, had helped invent Astroturf, and went on to write what would become ISO 9001. In essence, my upbringing was heavily influenced by a strong sense of ethics and a deep understanding of the value of innovation and quality.

The Machine Learning revolution

We’re on the cusp of a worldwide revolution in Machine Learning – a wonderful set of tools and technologies, which have huge potential to either make the world a better place or further divide it. There’s always an ethical side to any new and potentially market-changing technology. However, unlike previous industrial revolutions, machine learning is focused on the ‘why’ of things, rather than the ‘how’ – or to phrase it another way: for the first time we’re inventing a better thinker, rather than a better ‘do-er’.

Companies now have the ability to build systems that will fundamentally transform billions of lives, all at the click of a button. Consider issues such as a lack of transparency in decision-making, unconscious bias, and the law of unintended consequences, all falling upon millions and millions of unsuspecting people: the impact on society is, and will increasingly be, huge.

Decisions framed as unbiased and data-based are still decisions based on what the machines have been taught by humans. We are responsible for determining what defines a “good” outcome, and it may not be a universally positive result. It’s more likely that these decisions are only useful in certain contexts and not others.

The machine, or its algorithms, has no sense of right or wrong as we naturally do. We have a moral sense that machines cannot learn (at least for the foreseeable future); a machine makes every decision on data and acts as it’s told to. It’s crucial to understand how these systems make decisions, so we know why they’re being made.

Some types of machine learning, and certain classes of algorithm, are more transparent than others. Researchers are deliberately designing algorithms to be transparent, so that you can follow, and build on, their reasoning. A system that recommends decisions rather than makes them, and can explain them, will be of great help across many industries.
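As a purely illustrative sketch, such a “recommend and explain” system could be as simple as a shallow decision tree whose full rule set is printed for a human reviewer. The feature names, data, and threshold of “approval” below are invented for the example; this is one possible approach among many, not a prescription.

# Illustrative sketch: a shallow decision tree whose reasoning is legible,
# so the system recommends a decision and a person can see how it got there.
# Feature names and data are hypothetical, chosen only for demonstration.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["age", "income", "prior_defaults"]  # hypothetical features
X = [[25, 30_000, 0], [40, 80_000, 1], [35, 50_000, 0], [50, 20_000, 2]]
y = [1, 0, 1, 0]  # 1 = recommend approval, 0 = refer to a human reviewer

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire decision logic can be read and audited by a person:
print(export_text(model, feature_names=feature_names))

# For one applicant, the system only recommends -- a human still decides.
applicant = [[30, 45_000, 0]]
print("Recommendation:", "approve" if model.predict(applicant)[0] else "refer to human")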

I want to make sure that we’re not being too negative about machine learning. It is a really useful set of tools and technologies. It’s just that, at the end of the day, a fool with a tool is still a fool, only quicker.

Machine learning can build a bright future for us – but I’d like any singularity that may happen to be an inclusive, benevolent one, rather than a nightmarish world of an all-knowing Big Brother. If innovation is making you run as fast as you can, you need to stop and check that you’re doing the right thing.

The time to think about this is now, before the cat is fully out of the bag.

Tech companies need to deeply consider the effects of their work – their role is to advise and help companies on the use of innovative technologies and make sure they don’t fall down a rabbit hole or ride too far on the hype-train. It’s a moral imperative to look at the ethical side of any technology, as well as its potential business value. The impact of machine learning is a topic that garners much attention, and this conversation mirrors the tenets of my upbringing: innovation, quality, ethics.

I have the good fortune to, at times, mentor bright-eyed newcomers to the world of big data and ML. I advise them to envisage the future ramifications of their work, and I often think to myself, just because you can, doesn’t mean you should!

Some things are best left to human beings…

I’d like my children to have as free and as happy a childhood as I did, discovering things rather than having an algorithm recommend them – as that can lead to an echo chamber. In this case, we should remember that some things are best left to humans. Growing up, my sisters gave me a love of music – especially 70s disco and 80s new romantics. While I’m sure a streaming-service algorithm could have led me that way eventually, I’m certain the music wouldn’t mean anywhere near as much to me!

There are clearly thousands of use cases where ML can help optimise our experiences. However, let’s not forget the need for a ‘human’ element to enrich those experiences.
