
Adversarial artificial intelligence: winning the cyber security battle

Source: information-age.com

Cybercriminals are utilising artificial intelligence to launch more effective attacks, and it's time to fight fire with fire, according to Martin Mackay, SVP, EMEA at Proofpoint.

Artificial intelligence (AI) has come a long way since its humble beginnings. Once thought to be a technology that would struggle to find its place in the real world, it is now all around us. It’s in our phones, our cars, and our homes. It can influence the ads we see, the purchases we make and the television we watch. It’s also fast becoming firmly embedded in our working lives — particularly in the world of cyber security.

The Capgemini Research Institute recently found that one in five organisations used AI cyber security pre-2019, with almost two-thirds planning to implement it by 2020. The technology is used across the board, in both detecting and responding to cyber attacks.

But as with any advancement in technology, AI is not only used for good. Just as cyber security teams are utilising machine learning to ward off threats, so too are bad actors weaponising the technology to increase the speed, effectiveness and impact of those threats.

We now find ourselves in an arms race. One that we can only win by embracing this rapidly evolving technology as part of a broad, deep defence.

Artificial intelligence in cyber security — defence

There’s no doubt that the cyber security industry is convinced of the worth of artificial intelligence. The AI cyber security market is already valued at $8.8 billion and is expected to top $38 billion by 2026.

What started out with fairly simple yet effective use cases, such as the email spam filter, has now expanded across every function of the cyber security team.

Today, AI is a vital line of defence against a wide range of threats, including people-centric attacks such as phishing. Every phishing email leaves behind a trail of data. This data can be collected and analysed by machine learning algorithms, which calculate the risk of a potentially harmful email by checking it for known malicious hallmarks.
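The hallmark-checking idea can be sketched as a simple weighted scorer. The feature names and weights below are illustrative assumptions, not any vendor's actual model; real systems learn such weights from large labelled email corpora.

```python
# Minimal sketch: score an email's phishing risk from known malicious
# hallmarks. Features and weights are hypothetical, for illustration only.

HALLMARK_WEIGHTS = {
    "urgent_language": 0.30,    # e.g. "act now", "account suspended"
    "mismatched_sender": 0.35,  # display name differs from sending domain
    "suspicious_url": 0.25,     # link host not seen in prior correspondence
    "credential_request": 0.10, # asks the recipient to enter a password
}

def phishing_risk(features: dict) -> float:
    """Sum the weights of every hallmark present, capped at 1.0."""
    score = sum(w for name, w in HALLMARK_WEIGHTS.items() if features.get(name))
    return min(score, 1.0)

email = {"urgent_language": True, "suspicious_url": True}
print(round(phishing_risk(email), 2))  # 0.55
```

In practice these weights would come from a trained classifier rather than a hand-written table, but the principle is the same: each observed hallmark nudges the risk score upward.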

The level of analysis can also extend to scanning attached files and URLs within the body of a message – and even, thanks to a type of machine learning known as computer vision, to detecting websites that impersonate the login pages of major phishing targets.
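The visual-similarity idea behind such computer vision checks can be illustrated with a toy average-hash comparison: a page that renders almost identically to a known brand's login page, but lives on an unrelated domain, is suspicious. The thumbnails and functions below are invented for illustration; production tools use trained vision models on real screenshots.

```python
# Toy illustration of visual lookalike detection via average hashing.
# Real systems use trained computer vision models, not this heuristic.

def average_hash(pixels):
    """pixels: flat list of grayscale values (e.g. an 8x8 thumbnail).
    Each bit records whether a pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

legit = [200] * 32 + [50] * 32      # stand-in thumbnail of the real login page
lookalike = [198] * 32 + [55] * 32  # near-identical clone on another domain
unrelated = [120, 60] * 32          # a page with a different layout

print(hamming_distance(average_hash(legit), average_hash(lookalike)))  # 0
print(hamming_distance(average_hash(legit), average_hash(unrelated)))  # 32
```

A distance of zero against a major brand's login page, served from a domain that brand does not own, is exactly the kind of signal such a system would flag.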

The same machine learning approach can also be applied to other common threats such as malware, which grows and evolves over time and often does considerable damage before an organisation knows what it’s up against.

Cyber security defences that employ AI can combat such threats with greater speed, relying on data and learnings from previous, similar attacks to predict and prevent their spread. As the technology continues to develop, so too will its prevalence within cyber security defence. Over 70% of organisations are currently testing use cases for AI cyber security, for everything from fraud and intrusion detection to risk scoring and user/machine behavioural analysis.

Perhaps the biggest benefit of AI, however, is its speed. Machine learning algorithms can quickly apply complex pattern recognition techniques to spot and thwart attacks much faster than any human.

Artificial intelligence in cyber security — attack

Unfortunately, while AI is making great strides in defending against common threats, it’s making it far easier for cybercriminals to execute them too.

Take phishing: AI has the potential to supercharge this threat, increasing the ease, speed and scale of an attack. Even rudimentary machine learning algorithms can monitor correspondence and credentials within a compromised account. Before long, the AI could mimic the correspondence style of the victim to spread malicious emails far and wide, repeating the attack again and again.

When it comes to malware, AI can facilitate the delivery of highly-targeted, undetectable attacks. IBM’s AI-powered malware proof of concept, DeepLocker, is able to leverage publicly available data to conceal itself from cyber security tools, lying dormant until it reaches its intended target. Once it detects the target — either via facial or voice recognition — it executes its malicious payload.

AI’s speed will also likely prove to be a major boon for cybercriminals, just as it is for those of us defending against them. Machine learning could be deployed to circumvent and break through cyber security defences faster than most prevention or detection tools can respond.

And AI will not only exacerbate existing threats – it’s already creating new ones. Sophisticated machine learning techniques can mimic and distort audio and video to facilitate cyber attacks. We have already seen this technology, known as deepfakes, in the wild. In March of this year, an unknown hacking group used this approach to defraud a UK-based energy subsidiary of over £200,000. The group impersonated the parent company’s CEO to convince the subsidiary’s managing director to make an urgent transfer to a Hungarian supplier. Convinced he was talking to his boss, he complied with the request and the money was successfully stolen.

As AI becomes ever-more convincing in its ability to ape human communication, attacks of this nature are likely to become increasingly common.

Winning the AI arms race

When you find yourself in an arms race, the only way to win is to stay ahead. For the cyber security industry, this is nothing new. While the tactics and technologies may have changed, the battle to stay in front has raged for decades.

In this latest standoff, to keep pace with AI-powered threats, we must embrace AI-powered defence. That said, AI should not be considered a panacea.

There’s no doubt that machine learning technology is both sophisticated and incredibly powerful, but it is just one piece of the puzzle.

When it comes to successfully defending against modern cyber attacks, there is no silver bullet – AI or otherwise. A strong defence must be deep, multifaceted and, despite the ‘rise of the machines’, people-centric.
