<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>cybercriminals Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/cybercriminals/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/cybercriminals/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 17 Nov 2020 05:09:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Cybercriminals Use Cloud Technology To Accelerate Business Attacks</title>
		<link>https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/</link>
					<comments>https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 17 Nov 2020 05:09:02 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12350</guid>

					<description><![CDATA[<p>Source: aithority.com Trend Micro Incorporated, the leader in cloud security, has identified a new class of cybercrime. Criminals are using cloud services and technology to speed up attacks, which decreases the <a class="read-more-link" href="https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/">Cybercriminals Use Cloud Technology To Accelerate Business Attacks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>Trend Micro Incorporated, the leader in cloud security, has identified a new class of cybercrime. Criminals are using cloud services and technology to speed up attacks, which decreases the amount of time enterprises have to identify and respond to a breach.</p>



<p>Trend Micro Research found terabytes of internal business data and logins for popular providers like Amazon, Google, Twitter, Facebook, and PayPal offered for sale on the dark web. This data is sold via access to the cloud logs in which it is stored. This results in more stolen accounts being monetized, and the time from initial data theft to stolen information being used against an enterprise has decreased from weeks to days or hours.</p>



<p>“The new market for access to cloud logs ensures stolen information can be used more quickly and effectively by the cybercrime community—that’s bad news for enterprise security teams,” said Robert McArdle, director of forward-looking threat research for Trend Micro. “This new cybercriminal market shows how criminals are using cloud technologies to compromise you, which also means a business is not exempt from this attack method even if it only uses on-premises services. All organizations will need to double down on preventative measures and ensure they have the visibility and controls needed to react fast to any incidents that occur.”</p>



<p>Once access to logs of stolen cloud data is purchased, the buyer uses the information to mount secondary infections. For example, Remote Desktop Protocol (RDP) credentials found in these logs are a popular entry point for criminals targeting enterprises with ransomware.</p>
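For defenders, one practical response to this market is to scan their own exported logs for credential material before attackers can harvest it. A minimal, hypothetical sketch (the rule names and patterns are illustrative, not taken from the Trend Micro report; real deployments use a dedicated secrets scanner with far more rules):

```python
import re

# Illustrative patterns for credential-like material in log text.
PATTERNS = {
    "rdp_file_password": re.compile(r"password\s*51\s*:b:\S+", re.IGNORECASE),
    "key_value_password": re.compile(r"(?:password|passwd|pwd)\s*[=:]\s*\S+", re.IGNORECASE),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_credentials(log_text):
    """Return (rule_name, matched_text) pairs found in the log text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(log_text):
            hits.append((name, match.group(0)))
    return hits

sample = "session start user=alice password=hunter2 region=us-east-1"
print(find_credentials(sample))  # [('key_value_password', 'password=hunter2')]
```

Any hit should trigger rotation of the exposed credential, since logs that reach cloud storage may be copied long before the leak is noticed.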



<p>Storing terabytes of stolen data in cloud environments has similar appeal for criminal businesses as it does for legitimate organizations. Cloud storage offers scalability and speed that provides more computing power and bandwidth to optimize operations.</p>



<p>Access to these logs of cloud data is often sold on a subscription basis for as much as $1,000 per month. Access to a single log can include millions of records, and higher prices are charged for frequently updated data sets or the promise of relative exclusivity.</p>



<p>With ready access to data in this way, cybercriminals can streamline and accelerate execution of attacks and potentially expand their number of targets. The result is to optimize cybercrime by ensuring threat actors who specialize in specific areas—say cryptocurrency theft, or e-commerce fraud—can get access to the data they need: quickly, easily and relatively cheaply.</p>



<p>The Trend Micro report warns that in the future, such activity could even give rise to a new type of cybercriminal—an expert in data mining who uses machine learning to enhance pre-processing and extraction of information to maximize its usefulness to buyers. The overall trend will be towards standardization of services and pricing, as the industry matures and professionalizes.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/">Cybercriminals Use Cloud Technology To Accelerate Business Attacks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/cybercriminals-use-cloud-technology-to-accelerate-business-attacks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Lifting the cyber security of the Internet of Things: voluntary Code of Practice</title>
		<link>https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/</link>
					<comments>https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 04 Sep 2020 07:41:32 +0000</pubDate>
				<category><![CDATA[Internet of things]]></category>
		<category><![CDATA[cyber security]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[government]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11367</guid>

					<description><![CDATA[<p>Source: minister.defence.gov.au The Morrison Government has today released a voluntary Code of Practice to improve the security of the Internet of Things (IoT) in Australia – including <a class="read-more-link" href="https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/">Lifting the cyber security of the Internet of Things: voluntary Code of Practice</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: minister.defence.gov.au</p>



<p>The Morrison Government has today released a voluntary Code of Practice to improve the security of the Internet of Things (IoT) in Australia – including everyday devices such as smart fridges, smart televisions, baby monitors and security cameras.</p>



<p>Minister for Home Affairs Peter Dutton said cyber security has never been more important to Australia’s economic prosperity.</p>



<p>“Internet-connected devices are increasingly part of Australian homes and businesses and many of these devices have poor security features that expose owners to compromise,” Mr Dutton said.</p>



<p>“Manufacturers should be developing these devices with security built in by design.</p>



<p>“Australians should be considering security features when purchasing these devices to protect themselves against unsolicited access by cybercriminals.”</p>



<p>Minister for Defence Senator the Hon Linda Reynolds CSC said the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) has today also released quick and easy tips to help Australian consumers protect themselves against cyber threats when buying and using internet-connected devices.</p>



<p>“Boosting the security and integrity of internet connected devices is critical to ensuring that the benefits and conveniences they provide can be enjoyed without falling victim to cybercriminals,” Minister Reynolds said.</p>



<p>When purchasing and setting up an IoT device, some of the questions families and businesses should ask are:</p>



<ol class="wp-block-list"><li>Is the device made by a well-known reputable company and sold by a well-known reputable company?</li><li>Is it possible to change the password?</li><li>Does the manufacturer provide updates?</li><li>What data will the device collect and who will the data be shared with?</li></ol>



<p>The ACSC has also produced guidance for manufacturers on how to implement the IoT Code of Practice.</p>



<p>The Code of Practice is a key deliverable as part of the 2020 Cyber Security Strategy and has been developed in close partnership with industry following nation-wide consultation earlier this year. It outlines the cyber security features the Government expects of internet-connected devices available in Australia.</p>



<p>The Code of Practice also aligns with and builds upon guidance provided by the United Kingdom, and is consistent with other international standards.</p>



<p>The Australian Government will continue to explore further initiatives for lifting the security of the Internet of Things and making Australia the safest place to connect online.</p>
<p>The post <a href="https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/">Lifting the cyber security of the Internet of Things: voluntary Code of Practice</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/lifting-the-cyber-security-of-the-internet-of-things-voluntary-code-of-practice/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Advanced artificial intelligence will evolve shifting the traditional advantage of the cybercriminal</title>
		<link>https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/</link>
					<comments>https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 26 Nov 2019 10:55:09 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5408</guid>

					<description><![CDATA[<p>Source: dqindia.com Fortinet unveiled predictions from the FortiGuard Labs team about the threat landscape for 2020 and beyond. These predictions reveal methods that Fortinet anticipates cybercriminals will <a class="read-more-link" href="https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/">Advanced artificial intelligence will evolve shifting the traditional advantage of the cybercriminal</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>Fortinet unveiled predictions from the FortiGuard Labs team about the threat landscape for 2020 and beyond. These predictions reveal methods that Fortinet anticipates cybercriminals will employ in the near future, along with important strategies that will help organizations protect against these oncoming attacks.</p>



<p>Michael Joseph, director system engineering, India &amp; SAARC, Fortinet, said: “Much of the success of cyber adversaries has been due to the ability to take advantage of the expanding attack surface and the resulting security gaps due to digital transformation. Most recently, their attack methodologies have become more sophisticated by integrating the precursors of AI and swarm technology. Luckily, this trajectory is about to shift, if more organizations use the same sorts of strategies to defend their networks that criminals are using to target them. This requires a unified approach that is broad, integrated, and automated to enable protection and visibility across network segments as well as various edges, from IoT to dynamic-clouds.”</p>



<h4 class="wp-block-heading">Changing the trajectory of cyberattacks</h4>



<p>Cyberattack methodologies have become more sophisticated in recent years, magnifying their effectiveness and speed. This trend looks likely to continue unless more organizations shift how they think about their security strategies. Given the volume, velocity, and sophistication of today’s global threat landscape, organizations must be able to respond in real time at machine speed to effectively counter aggressive attacks. Advances in artificial intelligence and threat intelligence will be vital in this fight.</p>



<h4 class="wp-block-heading">The evolution of artificial intelligence as a system</h4>



<p>One of the objectives of developing security-focused artificial intelligence (AI) over time has been to create an adaptive immune system for the network, similar to the one in the human body. The first generation of AI was designed to use machine learning models to learn, correlate, and then determine a specific course of action. The second generation leverages its increasingly sophisticated ability to detect patterns to significantly enhance things like access control by distributing learning nodes across an environment. In the third generation, rather than relying on a central, monolithic processing center, AI will interconnect its regional learning nodes so that locally collected data can be shared, correlated, and analyzed in a more distributed manner. This will be a very important development as organizations look to secure their expanding edge environments.</p>



<h4 class="wp-block-heading">Federated machine learning</h4>



<p>In addition to leveraging traditional forms of threat intelligence pulled from feeds or derived from internal traffic and data analysis, machine learning will eventually rely on a flood of relevant information coming from new edge devices to local learning nodes. By tracking and correlating this real-time information, an AI system will not only be able to generate a more complete view of the threat landscape, but also refine how local systems can respond to local events. AI systems will be able to see, correlate, track, and prepare for threats by sharing information across the network. Eventually, a federated learning system will allow data sets to be interconnected so that learning models can adapt to changing environments and event trends and so that an event at one point improves the intelligence of the entire system.</p>
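The federated setup described above can be sketched with a toy linear model, where each node shares only its parameters and sample count, never its raw data. Everything here is invented for illustration; production federated-learning systems (e.g. FedAvg-style schemes) involve many more rounds of local training and secure aggregation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a node's local data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(node_weights, node_counts):
    """Aggregate node parameters, weighted by local sample counts."""
    total = sum(node_counts)
    return sum(w * n for w, n in zip(node_weights, node_counts)) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):  # three local learning nodes with private data
    X = rng.normal(size=(50, 2))
    nodes.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(100):  # federation rounds: local step, then weighted average
    updates = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(updates, [len(y) for _, y in nodes])

print(np.round(global_w, 3))  # converges toward true_w
```

The key property is in the loop: only `updates` (model parameters) cross node boundaries, which is what lets an event observed at one node improve the shared model without centralizing the underlying data.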



<h4 class="wp-block-heading">Combining AI and playbooks to predict attacks</h4>



<p>Investing in AI not only allows organizations to automate tasks, but it can also enable an automated system that looks for and discovers attacks both before and after they occur. Combining machine learning with statistical analysis will allow organizations to develop customized action planning tied to AI to enhance threat detection and response. These threat playbooks could uncover underlying patterns that enable the AI system to predict an attacker’s next move, forecast where the next attack is likely to occur, and even determine which threat actors are the most likely culprits. If this information is fed into an AI learning system, remote learning nodes will be able to provide advanced and proactive protection, where they not only detect a threat, but also forecast its movements, proactively intervene, and coordinate with other nodes to simultaneously shut down all avenues of attack.</p>



<h4 class="wp-block-heading">The opportunity in counterintelligence and deception</h4>



<p>One of the most critical resources in the world of espionage is counterintelligence, and the same is true when attacking or defending an environment where moves are being carefully monitored. Defenders have a distinct advantage with access to the sorts of threat intelligence that cybercriminals generally do not, which can be augmented with machine learning and AI. The use of increased deception technologies could spark a counterintelligence retaliation by cyber adversaries. In this case, attackers will need to learn to differentiate between legitimate and deceptive traffic without getting caught simply for spying on traffic patterns. Organizations will be able to effectively counter this strategy by adding playbooks and more pervasive AI to their deception strategies. This strategy will not only detect criminals looking to identify legitimate traffic, but also improve the deceptive traffic so it becomes impossible to differentiate from legitimate transactions. Eventually, organizations could respond to any counterintelligence efforts before they happen, enabling them to maintain a position of superior control.</p>



<h4 class="wp-block-heading">Tighter integration with law enforcement</h4>



<p>Cybersecurity has unique requirements related to things like privacy and access, while cybercrime has no borders. As a result, law enforcement organizations are not only establishing global command centers but have also begun connecting them to the private sector, so they are one step closer to seeing and responding to cybercriminals in real-time. A fabric of law enforcement, as well as public and private sector relationships, can help in terms of identifying and responding to cybercriminals. Initiatives that foster a more unified approach to bridge the gaps between different international and local law enforcement agencies, governments, businesses, and security experts will help expedite the timely and secure exchange of information to protect critical infrastructure and against cybercrime.</p>



<h4 class="wp-block-heading">Cyber adversary sophistication is not slowing down</h4>



<p>Changes in strategy will not go without a response from cyber adversaries. For networks and organizations using sophisticated methods to detect and respond to attacks, the criminal response may be to attempt something even stronger. Combined with more sophisticated attack methods, an expanding potential attack surface, and more intelligent, AI-enabled systems, cybercriminal sophistication is not decreasing.</p>



<h4 class="wp-block-heading">Advanced evasion techniques</h4>



<p>A recent Fortinet Threat Landscape report demonstrates a rise in the use of advanced evasion techniques designed to prevent detection, disable security functions and devices, and operate under the radar using living-off-the-land (LOTL) strategies, which exploit existing installed software and disguise malicious traffic as legitimate. Many modern malware tools already incorporate features for evading antivirus or other threat-detection measures, but cyber adversaries are becoming more sophisticated in their obfuscation and anti-analysis practices to avoid detection. Such strategies exploit weaknesses in security resources and staffing.</p>



<h4 class="wp-block-heading">Swarm technology</h4>



<p>Over the past few years, the rise of swarm technology, which can leverage things like machine learning and AI to attack networks and devices, has shown new potential. Advances in swarm technology have powerful implications for medicine, transportation, engineering, and automated problem solving. However, if used maliciously, it may also be a game changer for adversaries if organizations do not update their security strategies. In the hands of cybercriminals, bot swarms could be used to infiltrate a network, overwhelm internal defenses, and efficiently find and extract data. Eventually, specialized bots, armed with specific functions, will be able to share and correlate intelligence gathered in real time to accelerate a swarm’s ability to select and modify attacks to compromise a target, or even multiple targets simultaneously.</p>



<h4 class="wp-block-heading">Weaponizing 5G and edge computing</h4>



<p>The advent of 5G may end up being the initial catalyst for the development of functional swarm-based attacks. This could be enabled by the ability to create local, ad hoc networks that can quickly share and process information and applications. By weaponizing 5G and edge computing, individually exploited devices could become a conduit for malicious code, and groups of compromised devices could work in concert to target victims at 5G speeds. Given the speed, intelligence, and localized nature of such an attack, legacy security technologies could be challenged to effectively fight off such a persistent strategy.</p>



<h4 class="wp-block-heading">A Change in how cybercriminals use zero-day attacks</h4>



<p>Traditionally, finding and developing an exploit for a zero-day vulnerability was expensive, so criminals typically hoard them until their existing portfolio of attacks is neutralized. With the expanding attack surface and the increasing ease of discovery, a rise in the volume of potentially exploitable zero-day vulnerabilities is on the horizon. AI-powered fuzzing and zero-day mining could increase the volume of zero-day attacks exponentially as well. Security measures will need to be in place to counter this trend.</p>
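The fuzzing idea mentioned above can be illustrated without any AI at all: mutate inputs at random and watch for a target that misbehaves. A toy sketch with a deliberately buggy parser standing in for a real target (AI-based fuzzers essentially replace the random `mutate` step with learned mutations that reach deeper code paths):

```python
import random

def parse_record(data: bytes):
    """A toy, deliberately buggy parser standing in for a real target."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    return data[1:1 + length]  # bug: never checks length against the buffer

def mutate(seed: bytes, rng):
    """Flip one to three random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations=1000, rng=None):
    """Feed mutated inputs to the parser and collect those that misbehave."""
    rng = rng or random.Random(0)
    findings = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        out = parse_record(case)
        if len(out) < case[0]:  # declared length exceeds the buffer
            findings.append(case)
    return findings

print(len(fuzz(b"\x03abc")), "truncating inputs found")
```

Even this blind version quickly finds inputs that trip the length-check bug; the economics the paragraph describes come from automating this search at scale.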
<p>The post <a href="https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/">Advanced artificial intelligence will evolve shifting the traditional advantage of the cybercriminal</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/advanced-artificial-intelligence-will-evolve-shifting-the-traditional-advantage-of-the-cybercriminal/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Seven Ways Cybercriminals Can Use Machine Learning</title>
		<link>https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 12 Jan 2018 05:13:17 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cybercrime]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1967</guid>

					<description><![CDATA[<p>Source &#8211; forbes.com Ben Gurion, the main international airport in Israel, is one of the most protected airports in the world. It is known for its multilayered security. <a class="read-more-link" href="https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/">Seven Ways Cybercriminals Can Use Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; forbes.com</p>
<p>Ben Gurion, the main international airport in Israel, is one of the most protected airports in the world, known for its multilayered security. On the way from your office to the airport, you are caught on airport cameras. The road curves for several kilometers before the terminal, and while you are driving, the security system has enough time to analyze your identity. If there is any sign of danger, you will be intercepted. Systems that analyze behavioral anomalies in computer networks work the same way, and implementing them is effective for defense: while a perpetrator is still running commands, an AI-based system can identify the intrusion and stave off any damage.</p>
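The behavior-anomaly systems described above can be approximated with a very small statistical model. A hedged sketch (the session data, the frequency baseline, and the log-likelihood scoring are illustrative choices, not how any particular product works):

```python
from collections import Counter
import math

def profile(sessions):
    """Baseline command frequencies across normal sessions."""
    counts = Counter(cmd for session in sessions for cmd in session)
    total = sum(counts.values())
    return {cmd: n / total for cmd, n in counts.items()}

def anomaly_score(session, baseline, floor=1e-4):
    """Average negative log-likelihood of a session under the baseline.

    Commands never seen in the baseline get a tiny floor probability,
    so sessions full of unfamiliar commands score high.
    """
    return sum(-math.log(baseline.get(cmd, floor)) for cmd in session) / len(session)

normal = [["ls", "cd", "cat", "ls"], ["cd", "ls", "vim", "ls"], ["cat", "ls", "cd"]]
base = profile(normal)
print(anomaly_score(["ls", "cd"], base))             # low: familiar commands
print(anomaly_score(["nc", "chmod", "curl"], base))  # high: unseen commands
```

A real product would model sequences and timing rather than bare frequencies, but the principle is the same: score live behavior against a learned baseline and intercept while the attacker is still "on the road to the terminal."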
<p>In the world of cybersecurity, however, AI deployment is not so rosy. Hackers are moving forward and adopting it as well; the U.S. intelligence community reports that artificial intelligence actually works in cybercriminals&#8217; favor.</p>
<p>Let&#8217;s go over a few areas where hackers deploy machine learning and the cybersecurity measures that should be taken in response.</p>
<p><strong>Data Gathering</strong></p>
<p>Every breach starts with data gathering, and hackers maximize their chances of success by gaining more information. They classify users and select potential victims using classification and clustering methods, a task that can be automated.</p>
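The clustering step mentioned above is the same technique defenders use for user segmentation. A toy k-means sketch over invented profile features (the features, data, and cluster interpretation are all hypothetical):

```python
import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means: alternately assign points to the nearest center
    and move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

# Hypothetical profile features: (posts per week, followers in thousands).
profiles = np.array([[1.0, 0.5], [1.2, 0.7], [0.9, 0.4],
                     [9.5, 50.0], [10.1, 47.0], [9.8, 52.0]])
labels, centers = kmeans(profiles, k=2)
print(labels)  # the low-activity and high-profile accounts fall into separate clusters
```

Run over scraped public profiles, a grouping like this is what lets an attacker (or a marketer) pick targets automatically rather than one by one, which is why limiting what you publish matters.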
<p>How can you protect yourself from becoming a victim? Your personal information must not be available in open sources, so avoid publishing large amounts of information about yourself on social networks.</p>
<p><strong>Phishing</strong></p>
<p>Neural networks can be trained to create spam that resembles real email. For this to work well, though, it helps to know the sender’s behavior, which can be learned through network phishing that gives hackers easy access to personal information. Research presented at Black Hat on automated spear phishing on Twitter supports this idea: such a tool can raise the success rate of phishing campaigns to as much as 30%, twice that of traditional automation and close to manual phishing.</p>
<p>How can you protect yourself from phishing? You could simply email the sender a question. Hackers have become savvier, however, and can analyze your message and respond appropriately, convincing you that the account is not compromised. Today’s systems are not that sophisticated, but it will not be long before smart chatbots can converse with you the way your friends do.</p>
<p>The most actionable recommendation is to ask the sender through other channels and messengers whether he or she sent the message. There is little chance that several of his or her accounts have been compromised at once.</p>
<p><strong>Voice Fabrication</strong></p>
<p>A new generation of AI-based tools from companies like Lyrebird can create fake audio and video that mimic any voice, which can help perpetrators with social engineering.</p>
<p>Frankly, it seems little can protect you from these tricks: assuming that everything written or spoken may be fabricated undermines confidence in all the information you receive.</p>
<p><strong>CAPTCHA Bypass</strong></p>
<p>A simple CAPTCHA test can be solved automatically; some systems claim over 98% accuracy. “I’m Not a Human: Breaking the Google reCAPTCHA” is a fascinating paper that was delivered at a Black Hat conference.</p>
<p>How can you protect yourself? Object-recognition CAPTCHAs are effectively dead. If you choose a CAPTCHA for your website, try MathCaptcha or one of its alternatives.</p>
<p><strong>Password Brute Force</strong></p>
<p>Password brute force is yet another area where cybercriminals can deploy machine learning. You may have heard of neural networks that generate new text based on the text they were trained on: give such a network a list of Eminem’s songs, and it will create a new song.</p>
<p>The same idea applies to generating passwords. Researchers at MIT have taken this approach, applied it to passwords, and obtained good results. An approach described in one of the latest papers, called “PassGAN,” uses GANs (Generative Adversarial Networks) to generate passwords. Cybercriminals consider this idea all the more promising after recent reporting from 4IQ suggested the existence of a database of 1.4 billion passwords aggregated from past breaches.</p>
<p>Use complicated passwords and exclude simple ones, especially any that appear in breach databases. The most secure passwords are random ones, or those built from shortened sentences and mixed with special characters.</p>
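The advice above can be turned into a simple automated check. A hedged sketch; the tiny blocklist and the 60-bit entropy threshold are illustrative choices (real checks use full breach corpora, such as the 1.4-billion-entry dump the article mentions, and more careful strength estimators):

```python
import math
import string

# Tiny illustrative blocklist; real checks use breach corpora.
COMMON = {"123456", "password", "qwerty", "letmein", "hunter2"}

def estimate_entropy_bits(password: str) -> float:
    """Crude upper bound: length * log2(size of the character pool used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password): pool += 10
    if any(c in string.punctuation for c in password): pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def acceptable(password: str, min_bits=60) -> bool:
    """Reject blocklisted passwords and anything below the entropy floor."""
    return password.lower() not in COMMON and estimate_entropy_bits(password) >= min_bits

print(acceptable("password"))               # False: on the blocklist
print(acceptable("Correct!Horse7Battery"))  # True: long, mixed character classes
```

Note this estimator overstates the strength of predictable human patterns, which is exactly the weakness model-based generators like PassGAN exploit; the blocklist check is the part that defends against them.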
<p><strong>Malware</strong></p>
<p>In 2017, the first publicly known example of AI for malware creation was proposed at Peking University in Beijing, where the authors created a network called MalGAN.</p>
<p>It resembles our reality, where viruses mutate and cause new flu epidemics, yet people who take care of their health catch them less often. The same happens with computers: regular hygiene, which online means never visiting insecure sites, saves people from viruses most of the time.</p>
<p><strong>Cybercrime Automation</strong></p>
<p>Savvy hackers apply machine learning to other areas as well. For certain criminal tasks there is something called a hivenet, a smarter kind of botnet. Where ordinary botnets are managed manually by cybercriminals, hivenets can change their behavior depending on circumstances. They resemble parasites living in devices, deciding which victims&#8217; resources to use next.</p>
<p>To protect IoT devices from most such attacks, it is essential to change default passwords.</p>
<p><strong>Conclusion</strong></p>
<p>The ideas above are only some examples of the ways hackers can use machine learning.</p>
<p>Aside from using more secure passwords and being more careful when visiting third-party websites, I can only advise paying attention to AI-based security systems in order to stay ahead of perpetrators. A year or two ago, everyone had a skeptical attitude toward the use of artificial intelligence. Today’s research findings and their implementation in products prove that AI actually works, and it&#8217;s here to stay.</p>
<p>The post <a href="https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/">Seven Ways Cybercriminals Can Use Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/seven-ways-cybercriminals-can-use-machine-learning/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>How Machine Learning Can Help Identify Cyber Vulnerabilities</title>
		<link>https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/</link>
					<comments>https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 14 Dec 2017 05:49:23 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cyber security]]></category>
		<category><![CDATA[Cyber Vulnerabilities]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1888</guid>

					<description><![CDATA[<p>Source &#8211; hbr.org People are undoubtedly your company’s most valuable asset. But if you ask cybersecurity experts if they share that sentiment, most would tell you that people <a class="read-more-link" href="https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/">How Machine Learning Can Help Identify Cyber Vulnerabilities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; hbr.org</p>
<p>People are undoubtedly your company’s most valuable asset. But if you ask cybersecurity experts if they share that sentiment, most would tell you that people are your biggest liability.</p>
<p>Historically, no matter how much money an organization spends on cybersecurity, there is typically one problem technology can&#8217;t solve: humans being human. Gartner expects worldwide spending on information security to reach $86.4 billion in 2017, growing to $93 billion in 2018, all in an effort to improve overall security and education programs to prevent humans from undermining the best-laid security plans. But it&#8217;s still not enough: human error continues to reign as a top threat.</p>
<p>According to IBM&#8217;s Cyber Security Intelligence Index, a staggering 95% of all security incidents involve human error. It is a shocking statistic, and for the most part it&#8217;s due to employees clicking on malicious links, losing their mobile devices or computers or having them stolen, or network administrators making simple misconfigurations. We&#8217;ve seen a rash of the latter problem recently, with more than a billion records exposed so far this year due to misconfigured servers. Organizations can count on the fact that mistakes will be made, and that cybercriminals will be standing by, ready to take advantage of those mistakes.</p>
<p>So how do organizations not only monitor for suspicious activity coming from the outside world, but also look at the behaviors of their employees to determine security risks? As the adage goes, “to err is human” — people are going to make mistakes. So we need to find ways to better understand humans, and anticipate errors or behaviors that are out of character — not only to better protect against security risks, but also to better serve internal stakeholders.</p>
<p>There’s an emerging discipline in security focused around user behavior analytics that is showing promise in helping to address the threat from outside, while also providing insights needed to solve the people problem. It puts to use new technologies that leverage a combination of big data and machine learning, allowing security teams to get to know their employees better and to quickly identify when things may be happening that are out of the norm.</p>
<p>To start, behavioral and contextual data points such as the typical location of an employee’s IP address, the time of day they usually log into the networks, the use of multiple machines/IP addresses, the files and information they typically access, and more can be compiled and monitored to establish a profile of common behaviors. For example, if an employee in the HR team is suddenly trying to access engineering databases hundreds of times per minute, it can be quickly flagged to the security team to prevent an incident.</p>
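<p>The profiling idea above can be sketched in a few lines. This is a hypothetical, minimal illustration (the field names, the mean-based baseline, and the tenfold threshold are all assumptions, not any vendor&#8217;s actual product logic):</p>

```python
from collections import defaultdict

def build_profile(events):
    """Build a per-(user, resource) baseline from observed access rates.

    events: list of (user, resource, requests_per_minute) observations.
    Baseline here is simply the mean observed rate -- an illustrative choice.
    """
    observed = defaultdict(list)
    for user, resource, rate in events:
        observed[(user, resource)].append(rate)
    return {key: sum(rates) / len(rates) for key, rates in observed.items()}

def is_anomalous(profile, user, resource, rate, factor=10.0):
    """Flag access to a resource the user never touches, or at a rate far
    above their baseline (factor is an assumed illustrative threshold)."""
    baseline = profile.get((user, resource))
    if baseline is None:
        return True  # e.g. an HR employee suddenly querying engineering data
    return rate > factor * baseline
```

With a baseline built from an HR employee&#8217;s normal activity, a burst of requests against an engineering database would be flagged immediately, while ordinary day-to-day access would not.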
<p>The real value here is when companies apply these learnings to build a risk-based authentication system to give staff access to data or systems. Essentially, it means customizing the level of access given to employees based on a risk score of their past behaviors, compared with an understanding of the data or systems they are asking for access to. This type of risk-based authentication enables better visibility of error-prone users, or those that have opened avenues of opportunity for cybercriminals in the past, helping to solve the “human” problem of cybersecurity.</p>
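<p>A risk-based authentication rule of the kind described above might look like the following sketch. The weighting, the tier cutoffs, and the tier names are invented for illustration; a real system would tune these against its own data:</p>

```python
def required_auth_level(risk_score, sensitivity):
    """Map a user's past-behavior risk score (0.0 safe .. 1.0 risky) and the
    sensitivity of the requested resource (0.0 public .. 1.0 critical) to an
    authentication tier. Equal weighting and cutoffs are assumptions."""
    combined = 0.5 * risk_score + 0.5 * sensitivity
    if combined < 0.3:
        return "password"         # low risk: a single factor suffices
    if combined < 0.7:
        return "mfa"              # medium risk: require a second factor
    return "mfa_plus_review"      # high risk: MFA plus manual approval
```

The point of the design is that error-prone users, or those who have been compromised before, automatically face stronger checks when they reach for sensitive systems, without burdening low-risk users doing routine work.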
<p>In order to achieve this, we must first understand that all users are not on the same playing field. There are many different levels of &#8220;savvy&#8221; when it comes to each individual within a company: some are extremely knowledgeable about technology and implementing safeguards, such as biometrics and multi-factor authentication. Others may not be as careful and may do things such as recycle passwords (shudder) across common accounts, which can easily be leaked or breached, or download documents from a suspicious email address.</p>
<p>In addition, there are many varying roles and needs throughout every company, from users with basic computing environments, to those that may need up to five different machines to do their jobs. For these reasons, there is no “one size fits all” approach to navigating the human element of security, and organizations can no longer rely on traditional automated technologies that take a “set it and forget it” mentality.</p>
<p>While it may go against traditional instincts around security, which usually centers on control and restrictions to fight human error, it’s best to let employees just be themselves — and design your systems to cope with that. The combination of understanding employees’ everyday interactions with IT and having the power to analyze every possible scenario in real time can help security teams define more appropriate levels of authentication for the entire workforce, from tech-savvy developers to the Luddites in the boardroom. This provides the right balance of security, privacy, and user experience, while protecting organizations—and the people within them—from themselves.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/">How Machine Learning Can Help Identify Cyber Vulnerabilities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-machine-learning-can-help-identify-cyber-vulnerabilities/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>What happens when cybercriminals start to use machine learning?</title>
		<link>https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 21 Oct 2017 06:15:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1517</guid>

					<description><![CDATA[<p>Source &#8211; computerworlduk.com Over the last few years, machine learning threat detection and defence company Darktrace has been something of a rising star in the cybersecurity industry. Its <a class="read-more-link" href="https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/">What happens when cybercriminals start to use machine learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>computerworlduk.com</strong></p>
<p>Over the last few years, machine learning threat detection and defence company Darktrace has been something of a rising star in the cybersecurity industry. Its core unsupervised machine learning technology lends it the reputation of being one of the best in AI-enabled security. But what exactly do those on the cutting edge of cybersecurity research worry about?</p>
<p>Computerworld UK met with director of cyber analysis at Darktrace, Andrew Tsonchev, at the IP Expo show in London&#8217;s Docklands late last month.</p>
<p>&#8220;A lot of solutions out there look at previous attacks and try to learn from them, so AI and machine learning are being built around learning from what they&#8217;ve seen before,&#8221; he said. &#8220;That&#8217;s quite effective at, say, coming up with a machine learning classifier that can detect banking trojans.&#8221;</p>
<p>But what&#8217;s the flip-side to that? If vendors are taking artificial intelligence seriously in threat detection, won&#8217;t their counterparts in the criminal world consider the same? Are these hackers as sophisticated currently as some of the vendors would have us believe they are?</p>
<p>To understand where machine learning might be useful for attackers, it&#8217;s useful to consider some instances where it has demonstrated strong advantages in defence.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/">What happens when cybercriminals start to use machine learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-happens-when-cybercriminals-start-to-use-machine-learning/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence cyber attacks are coming – but what does that mean?</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Aug 2017 10:53:47 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Computer hacking]]></category>
		<category><![CDATA[cyber attacks]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Robots]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=828</guid>

					<description><![CDATA[<p>Source &#8211; theconversation.com The next major cyberattack could involve artificial intelligence systems. It could even happen soon: At a recent cybersecurity conference, 62 industry professionals, out of the 100 questioned, <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/">Artificial intelligence cyber attacks are coming – but what does that mean?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>theconversation.com</strong></p>
<p>The next major cyberattack could involve artificial intelligence systems. <a href="https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/"><span id="more-828"></span></a>It could even happen soon: At a recent cybersecurity conference, 62 industry professionals, out of the 100 questioned, said they thought the first AI-enhanced cyberattack could come in the next 12 months.</p>
<p>This doesn’t mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts – things like identity theft, denial-of-service attacks and password cracking – more powerful and more efficient. This is dangerous enough – this type of hacking can steal money, cause emotional harm and even injure or kill people. Larger attacks can cut power to hundreds of thousands of people, shut down hospitals and even affect national security.</p>
<p>As a scholar who has studied AI decision-making, I can tell you that interpreting human actions is still difficult for AIs and that humans don’t really trust AI systems to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks – and cyberdefense – are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems and launch them at particular targets. But nevertheless, adding AI to today’s cybercrime and cybersecurity world will escalate what is already a rapidly changing arms race between attackers and defenders.</p>
<h2>Faster attacks</h2>
<p>Beyond computers’ lack of need for food and sleep – needs that limit human hackers’ efforts, even when they work in teams – automation can make complex attacks much faster and more effective.</p>
<p>To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs the ability to self-replicate, spreading from computer to computer without specific human instructions. In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on several computers or devices to overwhelm servers. The attack that shut down large sections of the internet in October 2016 used this type of approach. In some cases, common attacks are made available as a script that allows an unsophisticated user to choose a target and launch an attack against it.</p>
<p>AI, however, could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.</p>
<p>AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from their account until long after the thief has gotten away.</p>
<h2>Improved adaptation</h2>
<p>AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability, or start scanning for new ways into the system – without waiting for human instructions.</p>
<p>This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a programming and technological arms race, with defenders developing AI assistants to identify and protect against attacks – or perhaps even AIs with retaliatory attack capabilities.</p>
<h2>Avoiding the dangers</h2>
<p>Operating autonomously could lead AI systems to attack systems they shouldn’t, or cause unexpected damage. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for unmanned aerial vehicles to operate autonomously has raised similar questions about the need for humans to make the decisions about targets.</p>
<p>The consequences and implications are significant, but most people won’t notice a big change when the first AI attack is unleashed. For most of those affected, the outcome will be the same as human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grow.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/">Artificial intelligence cyber attacks are coming – but what does that mean?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
