Will Artificial Intelligence override human intelligence and experiences?

Source – yourstory.com

Doomsday theorists say Artificial Intelligence and the machines that use it will destroy mankind. While that may be a stretch, there is some merit in the argument that AI will surely challenge human thinking and behaviour – to what outcome remains to be seen.

Take the example of a ride-hailing app – Uber, Ola or any other. You, the passenger, and the driver may know the route to a destination, but both are compelled to follow the one calculated by the AI engine or else the driver is penalised. Human intelligence such as experience with traffic patterns, temporary blockages due to repairs and construction, or even shorter routes off the main road are not taken into consideration.

Seen at a macro level, would this create a loss of identity where human intelligence and experience of both the driver and the customer is overridden by a machine? In simple terms, here, a human has been displaced by AI. We may not accept it, but this is, in essence, a machine planning what is best for a human.

Man’s inventions have sought to reduce human effort, and this is apparent now more than ever. Every time electronic media evolve, they tend to make services obsolete, and now, they are making human decision-making obsolete too.

On what basis do machines take decisions? The answer is data – hundreds and thousands of gigabytes of data are collected across the world. It is collected when you walk into a store and buy toothpaste, it is collected when you pay for a ticket online, it is collected when you visit your doctor – the list is endless.

“Data helps you react fast to consumer needs and helps companies address them faster,” says Partha De Sarkar, CEO of Hinduja Global Solutions. He says statistical modelling, thanks to modern data libraries and computing power, has combined with AI and Machine Learning algorithms to throw up insights about a customer like never before.

AI is the beginning of a human-machine partnership but this partnership should start off with the coming together of many minds – sociologists, scientists, and engineers – who must deliberate on the effects of AI on communities and individuals.

“In the end, it is the treatment of the data where biases creep in,” says Varun Mayya, co-founder of Avalon Labs. He says every founder must be responsible for the AI platforms they set up even before they get consumers and clients to use them.

Experiences make each person different, but that is not exactly how AI works. Its algorithms bucket humans into different data types, disregarding cultures and preferences.

The algorithm bias

The cognitive revolution, touted as the next best thing in AI, thus falls flat when engineers use data to typecast individuals in a data set. “It is important for those claiming to use AI for consumer services to work with psychologists and sociologists before claiming their systems are representative of all communities and races,” says Nischith Rastogi, co-founder of Locus, a logistics tech company.

One such example of machine learning models erring with biases is the underwriting of loans. The machine set higher rates for individuals it thought came from certain communities, income brackets and geographies, without taking into account whether those individuals had the ability to service a loan.
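To make the idea concrete, here is a minimal, hypothetical sketch of the kind of audit that can surface such a skew: compare the rates a model assigns to applicants with similar repayment ability but different group labels. The applicant figures, group names and tolerance below are invented purely for illustration, not drawn from any real underwriting system.

```python
from statistics import mean

# Hypothetical model output: (group, ability-to-service score, assigned rate %)
applicants = [
    ("group_a", 0.82, 9.5),
    ("group_a", 0.80, 9.8),
    ("group_b", 0.83, 12.4),
    ("group_b", 0.81, 12.1),
]

def average_rate(group):
    """Average rate the model assigned to applicants in one group."""
    return mean(rate for g, _, rate in applicants if g == group)

gap = average_rate("group_b") - average_rate("group_a")
print(f"Rate gap between comparable applicants: {gap:.1f} percentage points")

# A large gap despite similar ability-to-service scores suggests the model
# is keying on group membership rather than creditworthiness.
if gap > 1.0:  # illustrative tolerance, not an industry standard
    print("Warning: possible group bias in the underwriting model")
```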

“Biases creep into AI fast. It is something that startups and corporates should be cognisant of,” says Nischith.

The question then is, why is data biased? It starts with the collection of this data and the medium it is captured on – the smartphone.

Smartphones create billions of data points about our food habits, fitness regimens, conversations, shopping lists, and payments. Here are a few biases thrown in by AI –

Entertainment: When several members of a family watch an online streaming service together, recommendations are based on past selections. Now, these may be those of a particular individual and not necessarily what would serve a common interest.
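As a purely illustrative sketch (the viewing history, genres and the naive "most-watched genre" rule below are all made up, not how any streaming service actually works), this shows how a shared account can end up with recommendations that reflect only one member's habits:

```python
from collections import Counter

# One account, three family members - the service only sees a single history.
shared_history = (
    ["thriller"] * 12      # member 1 watches daily
    + ["animation"] * 3    # member 2 watches occasionally
    + ["documentary"] * 2  # member 3 watches rarely
)

def recommend(history):
    """Naive recommender: suggest the single most-watched genre."""
    return Counter(history).most_common(1)[0][0]

print(recommend(shared_history))  # 'thriller' - one member's taste dominates
```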

According to a blog by PWC, there’s a need to understand the bias in data, the strengths of the algorithms used, and “generalisability” to unseen data.

The blog adds that while the governance structure used for standard statistical models can be used for machine learning, there are a number of additional elements of software development that must be considered. PWC continues to warn that the tests machine learning models "go through" need to be significantly more robust, and that a machine learning governance quality assurance framework will make developers more aware of the statistical and software engineering constructs the model operates within.
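What such a quality assurance check might look like in practice is sketched below. The model interface, subgroup split and thresholds are assumptions made for illustration, not PWC's framework: the idea is simply to fail the pipeline when a model generalises poorly to unseen data or performs very differently across subgroups.

```python
def accuracy(model, examples):
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for features, label in examples if model.predict(features) == label)
    return correct / len(examples)

def check_generalisation(model, holdout_by_group, floor=0.75, max_gap=0.05):
    """Fail the pipeline if accuracy on unseen data is poor overall,
    or differs too much between subgroups."""
    scores = {group: accuracy(model, examples)
              for group, examples in holdout_by_group.items()}
    assert min(scores.values()) >= floor, f"Underperforms on unseen data: {scores}"
    assert max(scores.values()) - min(scores.values()) <= max_gap, \
        f"Accuracy gap between subgroups too large: {scores}"
    return scores

class ThresholdModel:
    """Toy stand-in for a trained classifier."""
    def predict(self, features):
        return 1 if features[0] > 0.5 else 0

# Made-up held-out data, split by subgroup: ((features,), label) pairs.
holdout_by_group = {
    "group_a": [((0.7,), 1), ((0.2,), 0), ((0.9,), 1), ((0.4,), 0)],
    "group_b": [((0.6,), 1), ((0.3,), 0), ((0.8,), 1), ((0.1,), 0)],
}
print(check_generalisation(ThresholdModel(), holdout_by_group))
```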

According to IBM, AI systems are only as good as the data we put into them. Poor data can contain racial, gender, or ideological biases, and many AI systems continue to be trained using bad data, making it an ongoing problem. “But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful,” says IBM in its blog.

Retail and fashion: If you shop for fashion or beauty products, there is not only peer pressure to contend with now, but that from AI recommendations as well. AI today tends to attack you with a plethora of choices. And with fashion comes its ugly cousin – body shaming!

Earlier, the written word, in the form of fashion magazines, carried bias with pictures, and now, the same biases are carried over when building AI recommendations. With younger individuals taking to smartphones, these ‘recommendations’ may lead to unreasonable expectations of oneself.

Food and life: Everyone wants to live healthy, and it is widely understood that cultural moorings play a big role in what one can and cannot eat. The world of food and nutrition apps, however, tends to standardise profiles into broad strokes, fitting people into broad data buckets.

“We are training our data models to be as robust as possible when it comes to recommendations. The algorithms learn only if developers ask the right questions,” says Tushar Vashist, co-founder of Healthifyme.

No wonder then that governments are beginning to sit up and take notice, and action. The UK Parliament has commissioned a study on AI and the ethics surrounding its applications. The House of Lords-appointed Committee to “consider the economic, ethical and social implications of advances in artificial intelligence” was set up on June 29, 2017, and will seek answers to five key questions:

  • How does AI affect people in their everyday lives, and how is this likely to change?
  • What are the potential opportunities presented by Artificial Intelligence for the UK? How can these be realised?
  • What are the possible risks and implications of Artificial Intelligence? How can these be avoided?
  • How should the public be engaged with in a responsible manner about AI?
  • What are the ethical issues presented by the development and use of Artificial Intelligence?

There are also strong voices around the world on the need for regulatory bodies for Artificial Intelligence to study the ethics of AI.

Back home, what do Indian policymakers have to say about this? Nothing much, is the simple answer.

The Niti Aayog, which creates broad policy frameworks, is keen on creating opportunities for Indians to invest in AI, but is mum on the moral and ethical frameworks of the technology.

“We absolutely need auditability and explainability,” says K M Madhusudan, CTO of Mindtree. He adds there are two aspects to this – one is for serious enterprise-level AI adoption, for which technologists must ensure AI can explain why it made a particular decision. The second is to ensure it is not biased.
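A minimal sketch of what that auditability and explainability could look like, assuming nothing more than a toy linear scoring model (the feature names, weights and approval threshold below are hypothetical, not Mindtree's approach): record, for every decision, which inputs drove it and by how much, so the system can explain why it decided what it did.

```python
# Hypothetical weights for a toy linear approval model.
WEIGHTS = {"income": 0.6, "repayment_history": 1.2, "existing_debt": -0.9}
THRESHOLD = 1.0  # illustrative approval cut-off

def decide_and_explain(applicant):
    """Return the decision plus a per-feature breakdown for the audit log."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "contributions": contributions,  # why the decision came out this way
    }

# Normalised, made-up applicant features; a real system would persist this record.
print(decide_and_explain({"income": 0.8, "repayment_history": 0.9, "existing_debt": 0.3}))
```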

The list of biases can be endless. But it’s time for startups using AI to wake up and smell reality. It is in their interest to do so because they will hopefully soon be liable for the instructions or recommendations made by their AI. To avoid this, one must venture into creating reams of data before providing choices to individuals. In the end, we are just one big data set.
