How can artificial intelligence help in the fight to remain secure?

16 Oct - by aiuniverse - In Artificial Intelligence

Source: itpro.co.uk

Artificial intelligence (AI) has gone from science fiction to an increasingly common part of our lives. TV streaming services use AI and machine learning (ML) to recommend what you might like to watch next, for example, while other AI programs carry out rapid trades on the stock market without human intervention.

The information security sector hasn’t been left untouched by this trend, either. Increasingly, an AI element in cyber security technology is seen less as a nice-to-have and more as an essential part of the package.

“AI [has become] an expected feature within cyber security products and services,” says Jeff Pollard, Vice President & Principal Analyst at Forrester. “It’s now not a distinguishing characteristic or, you know, something that’s outside of the norm, it’s fully expected to be in there.”

It’s not just in defensive software that AI is playing a role, either. Microsoft has developed an AI-powered tool that helps developers spot bugs in their code, with a claimed accuracy rate of 99%. In theory, this could eliminate a large portion of exploitable software security flaws at the point of their creation.

An unsleeping sentinel

For those on the front lines of cyber defence, AI is fast becoming a game changer.

Craig York, CTO at Milton Keynes University Hospitals NHS Trust, has found that AI is a vital tool in his cyber security arsenal. He cites the 2017 WannaCry crisis as a turning point for the IT community when it comes to security.

“WannaCry made security a board-level discussion,” York tells IT Pro.

It was at around this time that he was introduced to Darktrace, a company specialising in AI-enabled security software, by a colleague at West Suffolk NHS Foundation Trust, which was already a customer.

“Humans can only do so much,” explains York. “We have three people in our cyber security team and while they’re very capable and very diligent, they’re human beings; they take breaks, they have a cup of tea. They need lunch, and they go home at the end of the working day.

“Having the latest and greatest patches doesn’t necessarily defend against everything that’s out there at the moment. And, if anything, some of our cybersecurity attacks are coming from other parts of the world that are doing business, effectively, when we aren’t at the hospital.”

He says that it’s in this area of cyber defence that AI comes into its own.

“We need security technologies that are going to provide a safer hospital, 24 hours a day, 365 days a year, at weekends and bank holidays. The AI technology that we use from Darktrace provides some level of that – it never sleeps, so if the AI thinks that something is happening on the network that shouldn’t, it can take action straight away.”

While AI is starting to inhabit a critical role in cyber security, particularly as IT departments and organisations as a whole adapt to the hyper-accelerated digital transformation brought about by the COVID-19 pandemic, it pays for IT leaders to think carefully about which problems they need to solve, rather than plumping for anything labelled AI.

“The problem most cyber security vendors have is that it’s just a buzzword; they can’t actually explain what they’re doing with AI – or machine learning, for that matter,” cautions Pollard.

He explains that while “there are definitely use cases for AI within cyber security”, it’s not something that can – or should – be applied to everything. 

“The most productive and proven use cases for AI in cyber security are really on the detection side,” he says. “So being able to help identify, you know, malware-associated clustering of activity and behaviour. That’s really an area where it landed and it made a lot of sense.

“What we haven’t seen yet is AI expanded beyond that to more differentiated use cases, or use cases that are not just based on identifying bad things.”
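Detection of the kind Pollard describes typically rests on learning a statistical baseline of normal activity and flagging sharp deviations from it. The sketch below is a deliberately simplified, hypothetical illustration of that idea using per-host connection counts; real products such as Darktrace use far more sophisticated models, and none of the host names or thresholds here come from the article.

```python
# Toy illustration of anomaly-based detection: learn a baseline of
# per-host network activity, then flag hosts whose behaviour deviates
# sharply from it. (Hypothetical example, not any vendor's actual method.)
from statistics import mean, stdev

def find_anomalies(baseline, current, threshold=3.0):
    """Return hosts whose current connection count sits more than
    `threshold` standard deviations above the baseline average."""
    mu = mean(baseline.values())
    sigma = stdev(baseline.values())
    flagged = []
    for host, count in current.items():
        if sigma > 0 and (count - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# Typical daytime traffic: each workstation makes a similar number
# of outbound connections per hour.
baseline = {"ws-01": 40, "ws-02": 35, "ws-03": 42, "ws-04": 38}

# Overnight snapshot: ws-03 suddenly makes thousands of connections,
# the kind of spike consistent with malware spreading laterally.
overnight = {"ws-01": 2, "ws-02": 0, "ws-03": 5000, "ws-04": 1}

print(find_anomalies(baseline, overnight))  # → ['ws-03']
```

The appeal for a stretched security team is that a check like this runs continuously, at 3am on a bank holiday as readily as at midday, and can trigger an automated response the moment the deviation appears.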

Turning the tables

It’s often argued, especially by those in the tech industry, that technology is neutral: being passive and unable to act of its own accord, it’s the use to which it’s put that is good or bad, rather than the tool itself. In this, AI is no exception.

While AI has become a key component of organisations’ cyber defence strategies, hackers and other malicious actors are also starting to use it to craft better attacks. One example is pulling together a convincing spear-phishing email, as AI can research a target more thoroughly and more rapidly than a human. Dr Roman Yampolskiy, associate professor of Computer Engineering and Computer Science at the University of Louisville, Kentucky, and director of the university’s Cybersecurity Laboratory, has claimed the quality of such emails would be so high that “even cybersecurity experts will fall for them”.

“AI is dual use technology used by both attackers and defenders,” he tells IT Pro. “In recent years AI has become capable of finding novel exploits.”

And while others point to streamlining operations in security departments, Yampolskiy sees another long-term possibility: “Like in all other fields, AI will eventually fully automate all aspects of the job. Given that both attackers and defenders use AI, it will become an arms race between their AIs.”

For now, though, it’s fair to say that while organisations should be realistic in their expectations of what AI can do, incorporating it into your cyber defences is quickly becoming best practice.
