THE ‘PERSONALITY’ IN ARTIFICIAL INTELLIGENCE

Source: pursuit.unimelb.edu.au

The rise of ‘deep learning’ has caused a lot of excitement around the revolutionary capabilities of artificially intelligent agents.

But it’s also raised fear and suspicion about what exactly is going on inside each algorithm.

One way for us to gain some understanding of our silicon-based friends (or foes?) is for them to disclose their framework of decision-making in a way that we humans can understand – by using the concept of personality.

My research explores how some of these deep learning agents can be better understood through their ‘personalities’ – like whether they are ‘greedy’, ‘selfish’ or ‘prudent’.

THE FOURTH INDUSTRIAL REVOLUTION

We are now at the dawn of a new era in AI technology – a so-called fourth industrial revolution that will reshape every industry.

At the forefront is deep learning, the technology responsible for the recent leap in AI capabilities.

Deep learning has shown remarkable progress in meeting or surpassing human-level performance in tasks typically thought to require human intelligence. In fact, it’s already used for the diagnosis of cancers, predictions of someone’s suitability for parole and many other high-stakes functions.

The great strength of deep learning lies in its ability to digest vast amounts of data, identifying patterns in that data and using them to make predictions.

However, deep learning’s extraordinary capacity has also raised fears and concerns.

THE ‘BLACK BOX’

One of the main criticisms of deep learning is that the technology is a ‘black box’ – no one knows or can explain exactly how deep learning agents arrive at their decisions.

Even the AI developers who create these agents cannot explain exactly how they work.

The ‘black box’ issue poses a particular challenge for regulators seeking to manage the provision of services by deep learning agents to the public.

With older forms of AI – algorithms that were based on decision trees or decision rules designed by humans – regulators could assess the rationale of those rules according to conventional wisdom.

Basically, these older forms of AI were less intelligent, as the AI agent was mainly implementing a set of rules given to it by its human developer.

With deep learning, regulators cannot review the rationale or rules behind the algorithms, as neither is actually intelligible to humans – there isn’t a defined body of knowledge or rules of reasoning humans can comprehend.

Essentially, deep learning agents use pre-existing data to find patterns and predict future outcomes based on those patterns. Precisely how they arrive at a prediction isn’t truly known.

So, if we are handing over important societal decisions to an artificially intelligent agent, we need to know if we can trust the technology.

THE ‘PERSONALITY’ BEHIND DEEP LEARNING

What the public and regulators need is reassurance that there is a way for everyone to have some understanding of deep learning agents, and that there is some level of predictability in their behaviour.

Deep learning agents can be understood as having ‘personality traits’ which can give the public and regulators alike more of an understanding about how they will behave.

An AI developer can control the ‘personality’ of the deep learning agent by setting positive rewards and negative punishments for particular actions.

Earlier research has shown that, by manipulating rewards and punishments, AI developers can control whether their agent exhibits purely competitive behaviour, purely cooperative behaviour or variations of both.
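
As a rough illustration – a minimal sketch, not drawn from the article – one common way to tune this in multi-agent reinforcement learning is to blend an agent’s own payoff with its counterpart’s payoff when computing the training reward. The function name shaped_reward and the weight alpha below are hypothetical names chosen for illustration:

    # Hypothetical sketch: blending payoffs to tune an agent's 'personality'
    # in a multi-agent reinforcement-learning setup. The weight `alpha` is an
    # illustrative parameter, not something specified in the article.

    def shaped_reward(own_payoff: float, other_payoff: float, alpha: float) -> float:
        """Blend payoffs to shape behaviour.

        alpha = -1.0  -> purely competitive (rewarded when the other agent loses)
        alpha =  0.0  -> purely self-interested
        alpha = +1.0  -> purely cooperative (the other agent's gains count as its own)
        """
        return own_payoff + alpha * other_payoff

    # Example: an agent gains 1.0 while its counterpart loses 0.5.
    print(shaped_reward(1.0, -0.5, alpha=-1.0))  # competitive agent: 1.5
    print(shaped_reward(1.0, -0.5, alpha=1.0))   # cooperative agent: 0.5

Sliding alpha between -1 and +1 moves the agent along a spectrum from rivalry to altruism – exactly the kind of ‘personality’ knob a developer controls.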

PREDICTABLE AND ACCEPTABLE BEHAVIOUR

Take, for example, a deep learning agent that’s been developed to provide financial advice without any input from a human.

Through rewards and punishments, the developer can control how ‘greedy’ or ‘prudent’ the agent is in balancing the pursuit of immediate financial gains against the long-term growth of a portfolio.

The agent’s appetite for risk can also be controlled – how much of a loss it is willing to accept in pursuit of gains.
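
To make these two knobs concrete, here is a minimal Python sketch – hypothetical, for illustration only, as the article does not specify any implementation. It pairs a risk-aversion coefficient that penalises potential losses with a discount factor that weights immediate gains against long-term growth:

    # Hypothetical sketch of two 'personality' knobs a developer might set
    # when training a financial-advice agent. All names and numbers below
    # are illustrative assumptions, not drawn from the article.

    def step_reward(portfolio_return: float, loss_risk: float,
                    risk_aversion: float) -> float:
        # Penalise expected loss in proportion to the agent's risk aversion:
        # a high `risk_aversion` yields a cautious agent, a low one a risk-taker.
        return portfolio_return - risk_aversion * loss_risk

    def discounted_value(step_rewards: list, gamma: float) -> float:
        # The discount factor gamma sets 'greedy' vs 'prudent': gamma near 0
        # makes the agent chase immediate gains; gamma near 1 makes it weigh
        # long-term portfolio growth almost as heavily as today's return.
        return sum(r * gamma ** t for t, r in enumerate(step_rewards))

    print(step_reward(0.05, 0.03, risk_aversion=2.0))  # cautious agent: -0.01

    rewards = [0.05, 0.02, 0.08]  # per-period returns (illustrative numbers)
    print(discounted_value(rewards, gamma=0.1))   # 'greedy' agent
    print(discounted_value(rewards, gamma=0.99))  # 'prudent' agent

Under these assumptions, a discount factor near 0 produces the ‘greedy’ personality described above, while one near 1 produces a ‘prudent’ one; raising the risk-aversion coefficient shrinks the agent’s appetite for risk.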

But before any of these services are offered to the public, AI developers should give a plain-language description of the basic personality traits of a deep learning agent.

The goal of disclosing the agent’s ‘personality’ is to allow a person without any knowledge of AI technology to have a meaningful understanding of the likely behaviour of the agent.

This would allow consumers to make informed choices regarding which agent is suitable for them. It also assists regulators in identifying agents that pose a higher risk to the public, enabling regulators to use their resources in a more targeted fashion.

As deep learning agents take over increasingly sophisticated tasks with serious social consequences, people need reassurance that these agents will behave in predictable and acceptable ways.

Describing deep learning agents using the intuitive concept of personality traits would allow the public to respond based on knowledge and understanding.
