What happens when cybercriminals start to use machine learning?

21 Oct - by aiuniverse - In Artificial Intelligence, Machine Learning

Source – computerworlduk.com

Over the last few years, machine learning threat detection and defence company Darktrace has been something of a rising star in the cybersecurity industry. Its core unsupervised machine learning technology has lent it a reputation as one of the best in AI-enabled security. But what exactly do those on the cutting edge of cybersecurity research worry about?

Computerworld UK met with Andrew Tsonchev, director of cyber analysis at Darktrace, at the IP Expo show in London’s Docklands late last month.

“A lot of solutions out there look at previous attacks and try to learn from them, so AI and machine learning are being built around learning from what they’ve seen before,” he said. “That’s quite effective at, say, coming up with a machine learning classifier that can detect banking trojans.”
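The supervised approach Tsonchev describes can be illustrated with a toy example: a classifier trained on feature vectors extracted from previously seen traffic, labelled as malicious or benign. The sketch below uses a simple perceptron and entirely invented features and data (payload entropy, outbound connection rate, contact with a known command-and-control domain); real products use far richer features and models.

```python
# Toy sketch of learning from labelled past attacks (supervised learning).
# Features and training data are hypothetical, for illustration only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a simple perceptron; samples are feature vectors, labels 0/1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # nonzero only on a misclassification
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Return 1 (trojan-like) or 0 (benign) for a feature vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [payload entropy, outbound connections/min,
# contacts known C2 domain (0/1)]. Label 1 = banking-trojan traffic.
train_x = [[7.2, 40, 1], [6.9, 35, 1], [3.1, 2, 0], [2.8, 5, 0]]
train_y = [1, 1, 0, 0]

w, b = train_perceptron(train_x, train_y)
print(classify(w, b, [7.0, 38, 1]))  # resembles the known trojans → 1
```

The key limitation, which motivates Tsonchev's point, is that such a model can only recognise what resembles the attacks it was trained on.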

But what’s the flip side? If vendors are taking artificial intelligence seriously for threat detection, won’t their counterparts in the criminal world do the same? And are these hackers currently as sophisticated as some vendors would have us believe?

To understand where machine learning might be useful to attackers, it helps to consider some instances where it has demonstrated strong advantages in defence.
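One such defensive advantage is the unsupervised approach mentioned above: rather than learning from labelled past attacks, the system builds a baseline of normal behaviour and flags deviations from it. The sketch below is a deliberately minimal illustration of that idea (a simple standard-deviation test on invented traffic figures), not a description of Darktrace's actual method.

```python
# Minimal sketch of unsupervised anomaly detection: model "normal" behaviour
# and flag deviations, with no labelled attack data at all.
# The traffic figures and threshold are hypothetical, for illustration only.
from statistics import mean, stdev

def build_baseline(observations):
    """Summarise normal behaviour as its mean and standard deviation."""
    return mean(observations), stdev(observations)

def is_anomalous(baseline, value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical baseline: outbound megabytes per hour for one workstation.
normal_traffic = [12, 15, 11, 14, 13, 16, 12, 14, 13, 15]
baseline = build_baseline(normal_traffic)

print(is_anomalous(baseline, 14))   # typical volume → False
print(is_anomalous(baseline, 480))  # exfiltration-scale spike → True
```

Because nothing here depends on having seen an attack before, this style of detection can in principle catch novel threats that a classifier trained on past attacks would miss.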
