Seven Ways Cybercriminals Can Use Machine Learning
Source – forbes.com
Ben Gurion, Israel's main international airport, is one of the most protected airports in the world, known for its multilayered security. On the way to the terminal you are caught on airport cameras; the road curves for several kilometers, giving the security system enough time to analyze your identity while you drive. At any sign of danger, you will be intercepted. Behavioral anomaly analysis in computer systems works the same way: while a perpetrator is running certain commands, an AI-based system can identify the intrusion and stave off any damage. Implementing such systems is effective for defense.
The picture is not so rosy, however: hackers are moving forward and adopting AI as well. The U.S. intelligence community reports that artificial intelligence actually works in cybercriminals' favor.
Let's go over a few areas where hackers deploy machine learning and find out which cybersecurity measures should be taken.
Every breach starts with data gathering. Hackers maximize their chances of success by collecting more information, classifying users and selecting potential victims carefully with classification and clustering methods. This task can be automated.
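To make the idea concrete, here is a minimal sketch of the kind of clustering automation described above: a small hand-rolled k-means grouping users by invented public-profile features (the feature names, data, and the choice of k-means are all assumptions for illustration, not a description of any real attack tool).

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group feature vectors into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to the nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers as cluster means (keep old center if empty).
        centers = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical profiles: (posts per week, public connections / 100).
profiles = [(1, 2), (2, 1), (1, 1), (9, 8), (8, 9), (9, 9)]
centers, clusters = kmeans(profiles, k=2)
```

On this toy data the two well-separated groups of profiles end up in two clusters of three; a real pipeline would use many more features and a library implementation, but the principle of automatically sorting candidates into groups is the same.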
How can you protect yourself from being their victim? It goes without saying that your personal information must not be available in open sources, so you should not publish an awful lot of information about yourself on social networks.
Neural networks can be trained to create spam that resembles a real email. For this to work well, though, the model needs to know the sender's behavior, which hackers can obtain by phishing on social networks. BlackHat research on automated spearphishing on Twitter proves this idea: the tool increased the success rate of phishing campaigns to as much as 30 percent, twice that of traditional automation and comparable to manual phishing.
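As a simplified stand-in for the neural text generation described above, the sketch below trains a word-level Markov chain on a tiny invented sample of a sender's messages and then samples text in a similar style. The corpus and function names are assumptions for illustration; real tools use far larger models and real harvested data.

```python
import random

# Hypothetical scraped messages from a target sender.
corpus = (
    "hey can you check the report today "
    "hey can you send the report to me "
    "can you check the numbers today"
).split()

# Build a word-level Markov chain: word -> list of observed next words.
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def generate(start, n, seed=1):
    """Sample up to n words in the style of the training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

message = generate("hey", 8)
```

Even this crude model produces text that reuses the sender's vocabulary and phrasing, which is exactly what makes style-mimicking phishing messages convincing.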
How can you protect yourself from phishing? You could simply mail a question back to the sender. Hackers have become savvier, however, and can analyze your message and respond appropriately, convincing you that the account is not compromised. Today such systems are not sophisticated, but it will not be long before smart chatbots communicate with you the way your friends do.
The most actionable recommendation is to ask the sender through another channel or messenger whether he or she really sent the message. There is little chance that several of a person's accounts are compromised at once.
A new generation of AI-based companies like Lyrebird can create fake audio files and videos that mimic any voice, which can help perpetrators with social engineering.
Frankly speaking, it seems nothing can fully protect you from these tricks: treating everything written or spoken as potentially fabricated undermines confidence in all the information you receive.
A simple captcha test can be solved automatically; some systems report over 98% accuracy. "I'm Not a Human: Breaking the Google reCAPTCHA" is a fascinating paper delivered at a BlackHat conference.
How can you protect yourself? Object-recognition captchas are effectively dead. If you choose a captcha for your website, try MathCaptcha or one of its alternatives.
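A math-style challenge of the kind recommended above can be sketched as follows. This is a hand-rolled illustration of the concept, not the MathCaptcha product's actual API; the question format and verification logic are assumptions.

```python
import random

def make_challenge(seed=None):
    """Generate a simple arithmetic question and its expected answer."""
    rng = random.Random(seed)
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def verify(answer, expected):
    """Server-side check of the user's submitted answer."""
    try:
        return int(answer) == expected
    except (TypeError, ValueError):
        return False

question, expected = make_challenge(seed=42)
```

In practice the expected answer would be stored server-side (for example in the session), never sent to the client, and the challenge would be combined with rate limiting, since arithmetic alone is also machine-solvable.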
Password brute force is yet another area where cybercriminals can deploy machine learning. You may have heard of neural networks that generate text based on the texts they were trained on. Give such a network, say, a list of Eminem's songs, and it will create a new song.
The same idea applies to generating passwords. Researchers at MIT have taken this approach, applied it to passwords and received good results. A recent paper, "PassGAN," uses GANs (Generative Adversarial Networks) to generate passwords. Cybercriminals consider the idea even more promising after 4IQ reported a database of 1.4 billion passwords aggregated from past breaches.
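The following sketch shows the underlying principle with a much cruder technique than PassGAN: a character-level bigram model trained on a tiny, invented sample of leaked passwords, then sampled to produce candidate guesses that statistically resemble the training data. The sample list and model are illustrative assumptions only.

```python
import random

# Invented stand-in for a leaked-password list.
leaked = ["password1", "passw0rd", "letmein1", "qwerty12"]

START, END = "^", "$"

# Character-level bigram model: char -> list of observed next chars.
model = {}
for pw in leaked:
    s = START + pw + END
    for a, b in zip(s, s[1:]):
        model.setdefault(a, []).append(b)

def sample_password(seed, max_len=12):
    """Sample one candidate guess from the bigram model."""
    rng = random.Random(seed)
    ch, out = START, []
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == END:
            break
        out.append(ch)
    return "".join(out)

guesses = [sample_password(seed=i) for i in range(5)]
```

Guesses generated this way concentrate on character patterns humans actually use, which is why learned generators outperform blind brute force; GAN-based tools take the same idea much further.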
Use complicated passwords and avoid simple ones, especially any that appear in leaked databases. Passwords built from shortened sentences and mixed with special characters are among the more secure options.
In 2017, the first publicly known example of AI used for malware creation appeared: authors at Peking University in Beijing proposed the MalGAN network.
It resembles our reality, where viruses mutate, causing new flu epidemics, and people who take care of their health catch them less often. The same happens with computers: regular hygiene, which online means never visiting insecure sites, saves people from viruses most of the time.
Savvy hackers apply machine learning to other areas as well. In certain criminal tasks there is something called a hivenet, a smart botnet. Whereas cybercriminals manage ordinary botnets manually, hivenets can change their behavior depending on circumstances. They resemble parasites living in devices, deciding whose resources to use next.
It is essential to change default passwords to protect IoT devices from most attacks.
The ideas above are only some examples of the ways hackers can use machine learning.
Aside from using more secure passwords and being more careful when visiting third-party websites, I can only advise paying attention to AI-based security systems in order to stay ahead of perpetrators. A year or two ago, everyone was skeptical about the use of artificial intelligence. Today's research findings and product implementations prove that AI actually works, and it's here to stay.