Why Artificial Intelligence is Important for Cyber-Security

Source – eweek.com

There is a lot of hype around artificial intelligence, and while the technology can be useful, it does have limitations, according to RSA CTO Zulfikar Ramzan.

Speaking at the Dell Technologies Experience at the South by Southwest (SXSW) event in Austin, Texas, on March 12, Ramzan detailed his views on AI in a session titled “AI: Boon or a Boondoggle?”

“There is a tendency to think of AI as this all-encompassing panacea that can solve any problem,” he said.

Ramzan explained that AI can be a somewhat abstract concept: at its core, it means computers can be trained to handle certain kinds of tasks intelligently. Within AI, there is the subfield of machine learning, which he said people often use interchangeably with AI. Machine learning was first defined in 1959 by computer scientist Arthur Samuel as the “field of study that gives computers the ability to learn without being explicitly programmed,” Ramzan said.

Machine learning enables computers to learn from data. As such, Ramzan said that if an organization has an interesting data set, it can use a machine learning algorithm to analyze the data and make inferences about the data set to gain meaningful insights that can aid different decision-making processes.
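What that workflow looks like can be sketched in a few lines. The example below is a minimal illustration, not anything from the talk: it assumes scikit-learn, and the features, data and labels are invented placeholders. It simply fits a model to a data set and then draws an inference from it to support a decision.

```python
# Minimal sketch of "learn from a data set, then infer" (hypothetical data).
from sklearn.linear_model import LogisticRegression

# Each row: [login_failures_last_hour, bytes_uploaded_mb]; label 1 = suspicious
X = [[0, 2], [1, 5], [12, 300], [9, 150], [0, 1], [15, 500]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Inference on new, unseen activity
print(model.predict([[11, 250]]))        # e.g. [1] -> flag for review
print(model.predict_proba([[11, 250]]))  # class probabilities to support a decision
```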

Cyber-Security

Machine learning has a very strong use case inside of cyber-security, according to Ramzan.

“Cyber-security is about making intelligent decisions based on what is good and what is bad, based on the data that you have in front of you,” he said. “That’s a problem that is suited to machine learning techniques.”

For example, if an individual gets an email, it’s possible to determine whether it is spam based on machine learning techniques, he said. Ramzan explained that spam filtering technologies look for things such as word patterns, where an email was sent from, and other reputation characteristics. In addition, machine learning techniques can be applied to historical email data to help determine the rules needed to identify spam.
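As a hedged illustration of that idea (this is not code from the article; it assumes scikit-learn and a toy, invented data set), a bag-of-words spam classifier learned from labeled historical email might look like this:

```python
# Toy spam filter: learn word-pattern rules from labeled historical email.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",       # spam
    "limited offer click here",   # spam
    "meeting notes attached",     # legitimate
    "lunch tomorrow at noon",     # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()             # word-pattern (bag-of-words) features
features = vectorizer.fit_transform(emails)

classifier = MultinomialNB()
classifier.fit(features, labels)

new_email = vectorizer.transform(["click here to win a prize"])
print(classifier.predict(new_email))       # likely [1], i.e. spam
```

A production filter would add sender-reputation and other signals Ramzan mentions, but the learning step is the same: the rules come from the historical data rather than being hand-written.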

Machine learning is also playing a role in online fraud detection. Ramzan said machine learning techniques can be used to look at buying patterns and transaction data to understand what a typical transaction is for a given user, which can aid in spotting fraud.
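A simplified sketch of that idea, using an invented purchase history and a plain statistical outlier test rather than anything from RSA’s actual systems, might look like this:

```python
# Flag a transaction whose amount deviates sharply from the user's own history.
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Return True if `amount` is far outside the user's typical spending."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    z = abs(amount - mu) / sigma
    return z > threshold

past_purchases = [24.99, 18.50, 31.00, 22.75, 27.10, 19.99]  # hypothetical user
print(is_suspicious(past_purchases, 25.00))    # False: fits the buying pattern
print(is_suspicious(past_purchases, 2400.00))  # True: far outside the pattern
```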

Malware detection is another area where machine learning techniques can be helpful. Ramzan said malware tends to exhibit certain behaviors that are different from legitimate software. He noted that RSA was able to use machine learning to determine that one of its government customers was being attacked by malware from another nation-state.
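The behavioral angle can be illustrated with a hedged sketch; the behavioral features and counts below are invented for illustration and are not drawn from RSA’s products or the incident Ramzan described.

```python
# Classify software by observed behavior (hypothetical behavioral features).
from sklearn.ensemble import RandomForestClassifier

# Features per program: [registry_writes, outbound_connections,
#                        files_encrypted, spawned_processes]
X = [
    [2,    1,   0,  1],   # legitimate
    [5,    0,   0,  2],   # legitimate
    [120, 40, 300, 15],   # malware
    [80,  25, 150, 10],   # malware
]
y = [0, 0, 1, 1]  # 1 = malware

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

unknown_sample = [[95, 30, 200, 12]]
print(clf.predict(unknown_sample))  # likely [1]: behaves like known malware
```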

“You can actually identify things that would be otherwise unknown,” Ramzan said. “There are some great applications of AI and machine learning in the area of cyber-security.”

Pitfalls and Challenges

AI and machine learning technologies still tend to require some level of human input. Ramzan said human experts in a given domain of analysis are still needed to help configure a machine learning algorithm to have the right classifications and feature identifiers to analyze data.
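A small sketch of that division of labor: the expert chooses the classes and the feature identifiers, and the algorithm only learns from what the expert defined. The word list and reputation flag here are hypothetical.

```python
# Expert-defined features for a phishing / not-phishing classification task.
SUSPICIOUS_WORDS = {"password", "urgent", "verify", "wire"}  # chosen by a human analyst

def extract_features(email_text, sender_domain):
    """Turn a raw email into the feature vector the expert decided matters."""
    words = email_text.lower().split()
    return [
        sum(w in SUSPICIOUS_WORDS for w in words),   # suspicious-word count
        len(words),                                  # message length
        int(sender_domain.endswith(".example")),     # hypothetical reputation flag
    ]

print(extract_features("URGENT: verify your password", "mail.example"))  # [2, 4, 1]
```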

Beyond some level of human intervention, the most critical part of machine learning in Ramzan’s view is the data.

“People get so caught up in the cool math, but they forget that if you don’t have good data to begin with, nothing else matters—it’s just garbage in, garbage out,” he said.

Data has to be representative of what will actually be encountered in real life. Ultimately, people have to ask the right questions of the right data; otherwise, they won’t get the correct answers, Ramzan said.

“You can’t make good wine from bad grapes,” he said.

Another challenge identified by Ramzan is class imbalance in data sets; that is, most things in data sets are not bad. For example, the majority of credit card transactions are not fraudulent, and most files on a computer are legitimate. With such a high volume of legitimate items, Ramzan said, there is a risk that machine learning will flag false positives, which needs to be avoided.
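A quick back-of-the-envelope example (the figures are assumed, not from the talk) shows why imbalance makes false positives so dangerous:

```python
# Why raw accuracy hides false positives when almost everything is legitimate.
total = 100_000           # transactions (assumed)
fraudulent = 50           # actual fraud cases, i.e. 0.05% (assumed)

# A lazy model that labels everything "legitimate" is still 99.95% accurate...
accuracy_of_doing_nothing = (total - fraudulent) / total
print(f"{accuracy_of_doing_nothing:.2%}")   # 99.95%

# ...while a detector with even a 1% false-positive rate bothers roughly
# a thousand legitimate customers for the 50 frauds it might catch.
false_positive_rate = 0.01
false_positives = int((total - fraudulent) * false_positive_rate)
print(false_positives)    # 999 legitimate transactions flagged
```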

Adversaries Adapt

There are also few fixed rules when it comes to dealing with agile cyber-security adversaries, in Ramzan’s view.

“We’re dealing with sentient adversaries, people that will adapt, figure out what’s going on and make changes,” he said.

Ramzan noted that, in his experience, machine learning algorithms typically don’t assume adversarial scenarios in which threats actively try to sabotage the algorithm. He added that dealing with highly agile, active adversaries is still an area where machine learning technologies struggle.

“Marketing people won’t tell you this, but the reality is machine learning algorithms weren’t designed to deal with bad people. They were designed to deal with legitimate data sets they can learn from,” Ramzan said.
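A hedged toy example (not Ramzan’s, and deliberately oversimplified) of how an adaptive adversary undermines a model trained only on legitimate historical data:

```python
# A stand-in "trained model": counts known spammy tokens seen in training data.
def naive_spam_score(text):
    spam_tokens = {"free", "prize", "winner", "click"}
    return sum(word in spam_tokens for word in text.lower().split())

message = "click here to claim your free prize winner"
print(naive_spam_score(message))  # 4 -> blocked under a threshold of, say, 2

# The adversary adapts: same payload, obfuscated tokens the model never saw.
evasive = "cl1ck here to claim your fr3e pr1ze w1nner"
print(naive_spam_score(evasive))  # 0 -> sails past the static model
```

The model did exactly what it learned from its data; the attacker simply changed the data.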

In his view, AI and machine learning techniques are good at understanding what the norm is, but they are not always as good at figuring out things that are completely beyond an individual’s comprehension.

“These techniques [AI and machine learning], while powerful and useful, are not a panacea and they are not going to catch every kind of threat out there,” he said.
