How Artificial Intelligence Can Make Us Better at Being Human

Source: strategy-business.com

Though many jobs are well suited to today's artificial intelligence applications, dealing with human emotions may not, at first, seem like one of them. But what about the areas in which human emotion gets in the way?

Conservative estimates put the percentage of workplace harassment or discrimination that goes unreported at around 75 percent, and the researchers who made those estimates found that the least common way people deal with the experience of harassment is to report it. The reasons for this are myriad, and include the fear of judgment or reprisals and the pain and difficulty of recalling emotionally charged situations. “People are shy and they don’t [necessarily] want to talk to a human being, so tech can help solve that,” says Julia Shaw, a London-based memory scientist and author of The Memory Illusion.

In February 2018, Shaw, along with two partners who are software engineers, launched Spot, a Web-based chatbot that uses artificial intelligence (AI) to help people report distressing incidents. The app is based on an interview technique developed by psychologists and used by police departments to ensure that a recorded narrative is as sound and accurate as possible; it also gives the person reporting the incident the option of remaining anonymous.

The chatbot learns from the user’s initial description and responses to its prompts, asking specific, but not leading, questions about what happened. Together, the user and the bot generate a time-stamped PDF report of the incident, which the user can then send to his or her organization or choose to keep. So far, more than 50,000 people have accessed Spot (the site does not keep records of who then goes on to make a report). And it’s available 24 hours a day, so you don’t need to make an appointment with HR.
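To make the workflow concrete, here is a minimal sketch of how such a reporting flow might be structured. It is hypothetical code, not Spot's actual implementation: the prompts, data structure, and plain-text output are illustrative stand-ins for the app's proprietary question logic and PDF export.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative, non-leading prompts in the spirit of a cognitive interview.
# Spot's real question logic adapts to the user's own wording.
PROMPTS = [
    "In your own words, what happened?",
    "Where and when did the incident take place?",
    "Who else, if anyone, was present?",
    "Is there anything else you remember, however small?",
]

@dataclass
class IncidentReport:
    entries: list = field(default_factory=list)

    def record(self, question: str, answer: str) -> None:
        # Each answer is stored with a UTC timestamp, so the final report
        # documents exactly when the account was given.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "answer": answer,
        })

    def render(self) -> str:
        lines = ["TIME-STAMPED INCIDENT REPORT", ""]
        for e in self.entries:
            lines.append(f"[{e['time']}] {e['question']}")
            lines.append(f"    {e['answer']}")
        return "\n".join(lines)

def run_interview() -> str:
    report = IncidentReport()
    for prompt in PROMPTS:
        answer = input(prompt + "\n> ")
        if answer.strip():
            report.record(prompt, answer)
    return report.render()

if __name__ == "__main__":
    print(run_interview())
```

A production system would go further, generating follow-up questions from the user's own answers and exporting the record as a PDF the user can keep or forward, but the core idea is the same: open-ended questions, verbatim answers, and timestamps.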

“Evidence really matters in these cases, precision really matters in these cases,” Shaw says. What makes Spot useful is that it removes a layer of human interaction that is not only often fraught, but can also introduce inconsistencies in memory.

Spot is just one tool in an emerging market for machine learning–assisted apps that are tackling the juggernaut of human emotions. The National Bureau of Economic Research reports that the number of U.S. patent filings mentioning machine learning rose from 145 in 2010 to 594 in 2016; as of late 2018, there were already 14 times as many active AI startups as there were in 2000. And among them is a growing host of companies that are designing AI specifically around human emotion. Spot is intended to help end workplace discrimination, but according to Shaw, it’s also a “memory scientist for your pocket” that bolsters one of our weaknesses: emotional memory recall.

It might seem ironic, maybe even wrong, to employ machine learning and artificial intelligence to understand and game human emotion, but machines can “think” clearly in some specific areas where humans find it difficult.

The messaging app Ixy, for example, is marketed as a “personal AI mediator” that helps facilitate text chats; it previews texts to tell a user how he or she comes across to others, aiming to remove a layer of anxiety in human-to-human communication. And Israel-based Beyond Verbal already licenses its “emotions analytics” software, patented technology that gauges the emotional content of an individual voice, based on intonation. The tech is being used to help call centers fine-tune their employees’ interactions with customers and enable companies to monitor employee morale, and could be deployed to help AI virtual assistants better understand their users’ emotional state.

Yoram Levanon, chief science officer of Beyond Verbal and inventor of its technology, envisions even more ambitious applications, including virtual assistants that monitor people’s physical and emotional state by analyzing their vocal biomarkers. The app is able to recognize that how we say something may be more important than what we’re saying.
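Beyond Verbal's technology is proprietary, but the general idea of extracting intonation cues from a voice recording can be sketched with open tools. The snippet below is a hypothetical illustration, not the company's method: it uses the open-source librosa library to estimate pitch over time and derives a few simple prosodic statistics of the kind an emotion classifier might consume (the file path is made up).

```python
import numpy as np
import librosa  # open-source audio analysis library

def prosody_features(path: str) -> dict:
    """Extract simple intonation statistics from a speech recording.

    Purely illustrative: real "emotions analytics" systems use far
    richer acoustic features and trained models.
    """
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Estimate the fundamental frequency (pitch) frame by frame.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

    # Short-time energy as a rough proxy for vocal effort.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "mean_pitch_hz": float(np.mean(voiced_f0)) if voiced_f0.size else 0.0,
        "pitch_range_hz": float(np.ptp(voiced_f0)) if voiced_f0.size else 0.0,
        "pitch_variability": float(np.std(voiced_f0)) if voiced_f0.size else 0.0,
        "mean_energy": float(np.mean(rms)),
    }

# Example usage (hypothetical file):
# print(prosody_features("call_center_sample.wav"))
```

The point of features like these is exactly the one Levanon makes: they describe how something was said, independent of the words themselves.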

“AI is going to help us. Not replace us,” Levanon says. “I see AI as a complementary aid for humans. Making AI empathic and [capable of] understanding the emotions of humans is crucial for being complementary.”

Some organizations are already adopting similar AI, such as the audio deep learning developed by Australia-based company Sherlok. The Brisbane City Council uses the tech to scan emails, letters, forum discussions, and voice calls to uncover callers’ “pain points” and improve its own staff’s skills. Other emotion-led AI is being used to identify best practices for sales; in one example, a leading company used it to detect discrepancies in emotion and enthusiasm among C-suite executives on quarterly analyst calls to gain better insight into performance.

At this point, you might be starting to hear dystopian alarm bells. Concerns that AI could be used to judge our personalities and not our performance — and incorrectly at that — are valid. AI has a problem with biases of all kinds, largely because the humans who build it do. Developers have a saying — “garbage in, garbage out” — meaning that the technology may only be as good or as fair as its data sets and algorithms.
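“Garbage in, garbage out” can be made concrete with a toy example. In the hypothetical sketch below, a trivially simple model learns from a skewed historical dataset and faithfully reproduces that skew in its predictions; nothing in the code is prejudiced, and the bias arrives entirely through the data it was given.

```python
from collections import defaultdict

# Hypothetical, deliberately skewed "historical hiring" records:
# (group, hired). Group B candidates were rarely hired in the past.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """'Learn' the historical hire rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    # The model recommends a candidate only if their group's historical
    # hire rate clears a threshold, a direct echo of the input data.
    return model[group] >= 0.5

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True
print(predict(model, "B"))  # False: bias in, bias out
```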

In one of the more spectacular examples of AI gone wrong, Microsoft was forced to shut down its “social chatbot,” Tay, within 24 hours of its launch in March 2016. Tay was designed to learn from its conversations with real people on social media and personalize its responses based on previous interactions. But no one had counted on the trolls, who gorged Tay on a diet of racist, misogynist, homophobic, and grossly offensive chat. The chatbot soon began spouting its own contemptible comments.

Tay’s turn wasn’t an outlier. “If [AI tech] is not harnessed responsibly, there’s a risk that it leads to poor outcomes, increased bias, and increased discriminatory outcomes,” says Rob McCargow, director of artificial intelligence at PwC UK and an advisory board member of the All-Party Parliamentary Group on Artificial Intelligence in the United Kingdom. “There’s very much a double-edged sword to the technology.”

The consequences of irresponsible AI technology are potentially devastating. For example, in May 2016, the investigative journalism organization ProPublica found that an algorithm used by U.S. courts to assess the risk that a defendant would reoffend was wrongly labeling black defendants as potential reoffenders at twice the rate of white defendants. In this instance, AI isn’t so much a better angel of our nature as it is the devil in the mirror.

At the heart of the problem is the fact that engineers are overwhelmingly white and male, and tech is designed in a kind of vacuum, without the benefit of other perspectives. Unconscious bias is baked in, and it can have problematic consequences for tools such as recruitment software, which in some cases has been found to disadvantage certain groups of job applicants. And then there’s the other question: do we even want this? Earlier this year, Amazon won two patents for wristbands that could monitor its workers’ movements, prompting a flurry of headlines with Big Brother references.

“We’re in the foothills of the mountains in this. There are increasingly large numbers of businesses experimenting with and piloting this technology, but not necessarily applying it across the breadth of their enterprises,” says McCargow. “I think that the point is [that although] it might not be having a substantial impact on the workforce yet, it’s important we…think about this now, for the future.”

This future is already here. The Spot app offers a way for people to bolster and clarify their reports of harassment. Ixy can give real-time feedback on mobile chat to help users navigate difficult conversations. Beyond Verbal wants to help humans hear between the lines of conversations. These are just a few examples among a growing number of new technologies designed to help users navigate the eddies and whirlpools of human emotion. There will be more; we can count on it.

With most technology throughout history — the printing press, the textile mills, the telephone — the tectonic shifts for workers were both huge and subtle, shaping a landscape that was changed but not completely unrecognizable. AI, for all its transformative implications, is similar. As McCargow points out, there’s been a lot of concern over machines eroding our humanity, chipping away at what it means to be human. But that doesn’t have to be the case. Done right, machines could help us learn to be better humans.
