<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>cyberattack Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/cyberattack/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/cyberattack/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 31 Jul 2020 05:32:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Fooling deep neural networks for object detection with adversarial 3-D logos</title>
		<link>https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/</link>
					<comments>https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 31 Jul 2020 05:32:50 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[3-D adversarial logo]]></category>
		<category><![CDATA[cyberattack]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[deep neural networks]]></category>
		<category><![CDATA[Fooling]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[synthesize images]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10610</guid>

					<description><![CDATA[<p>Source: techxplore.com Over the past decade, researchers have developed a growing number of deep neural networks that can be trained to complete a variety of tasks, including recognizing people or objects in images. While many of these computational techniques have achieved remarkable results, they can sometimes be fooled into misclassifying data. An adversarial attack is <a class="read-more-link" href="https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/">Fooling deep neural networks for object detection with adversarial 3-D logos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techxplore.com</p>



<p>Over the past decade, researchers have developed a growing number of deep neural networks that can be trained to complete a variety of tasks, including recognizing people or objects in images. While many of these computational techniques have achieved remarkable results, they can sometimes be fooled into misclassifying data.</p>



<p>An adversarial attack is a type of cyberattack that specifically targets deep neural networks, tricking them into misclassifying data. It works by crafting adversarial inputs that closely resemble, yet subtly differ from, the data a deep neural network typically analyzes; because the network fails to register these slight differences, it makes incorrect predictions.</p>
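<p>To make the idea concrete, here is a minimal sketch (not the method from this paper) of a classic adversarial perturbation, the fast gradient sign method, in PyTorch; model, x and y stand in for a generic image classifier, an input batch and its true labels:</p>

<pre><code>import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: nudge each pixel of x in the
    direction that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The perturbed image looks nearly identical to the original,
    # yet it can flip the model prediction.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
</code></pre>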



<p>In recent years, this type of attack has become increasingly common, highlighting the vulnerabilities and flaws of many deep neural networks. A specific variant that has emerged more recently entails adding adversarial patches (e.g., logos) to images. Such attacks have so far primarily targeted models trained to detect objects or people in 2-D images.</p>
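<p>A patch attack can be sketched as optimizing the patch pixels to suppress the detector&#8217;s output; in this illustrative snippet, detection_score (a differentiable scalar confidence that the person is detected) and loader (a stream of image batches) are hypothetical stand-ins:</p>

<pre><code>import torch

patch = torch.rand(3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, top=80, left=80):
    # Paste the patch onto every image at a fixed location.
    patched = images.clone()
    patched[:, :, top:top + 64, left:left + 64] = patch.clamp(0.0, 1.0)
    return patched

for images in loader:
    score = detection_score(apply_patch(images, patch))
    optimizer.zero_grad()
    score.backward()
    optimizer.step()  # gradient descent drives the detection score down
</code></pre>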



<p>Researchers at Texas A&amp;M University, the University of Texas at Austin, the University of Science and Technology of China, and the MIT-IBM Watson AI Lab have recently introduced a new attack that entails adding 3-D adversarial logos to images with the aim of tricking deep neural networks for object detection. This attack, presented in a paper pre-published on arXiv, could be more applicable to real-world situations, as the objects and people such detectors encounter in the real world are three-dimensional.</p>



<p>&#8220;The primary aim of this work is to generate a structured patch in an arbitrary shape (called a &#8216;logo&#8217; by us), termed as a 3-D adversarial logo that, when appended to a 3-D human mesh and then rendered into 2-D images, can consistently fool the object detector under different human postures,&#8221; the researchers wrote in their paper.</p>



<p>Essentially, the researchers created an arbitrarily shaped logo based on a pre-existing 2-D texture image. They then mapped this image onto a 3-D adversarial logo using a texture-mapping method known as logo transformation. The resulting 3-D adversarial logo could serve as an adversarial texture, allowing the attacker to easily manipulate its shape and position.</p>
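<p>A rough sketch of how such a differentiable texture lookup might be wired up in PyTorch follows; the shapes are illustrative assumptions, and the actual logo transformation in the paper is more elaborate:</p>

<pre><code>import torch
import torch.nn.functional as F

def sample_logo_texture(texture, uv):
    """Differentiable lookup of a 2-D logo texture at mesh UV coords.

    texture: (1, 3, H, W) learnable 2-D logo image
    uv:      (1, Nf, 3, 2) per-face UV coordinates in [-1, 1]
    Returns: (1, 3, Nf, 3) colors for the three vertices of each face.
    """
    # grid_sample keeps the mapping from 2-D pixels to the 3-D surface
    # differentiable, so gradients from the renderer reach the logo.
    return F.grid_sample(texture, uv, align_corners=True)
</code></pre>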



<p>In contrast with previously introduced attacks that utilize adversarial patches, this new type of attack maps logos in 3-D, yet it derives their shapes from 2-D images. As a result, it enables the creation of versatile adversarial logos that can trick a broad variety of object or person detection methods, including those used in real-world situations, such as techniques for identifying people in CCTV footage.</p>



<p>&#8220;We render 3-D meshes with the 3-D adversarial logo attached into 2-D scenarios and synthesize images that could fool the detector,&#8221; the researchers wrote in their paper. &#8220;The shape of our 3-D adversarial logo comes from the selected logo texture in the 2-D domain. Hence, we can perform versatile adversarial training with shape and position controlled.&#8221;</p>
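<p>Putting the pieces together, the optimization can be pictured as the loop below; every name in it (human_meshes, attach_logo, render, random_camera, detector, person_confidence) is a hypothetical placeholder rather than the paper&#8217;s actual code:</p>

<pre><code>import torch

logo = torch.rand(1, 3, 256, 256, requires_grad=True)  # 2-D logo pixels
optimizer = torch.optim.Adam([logo], lr=0.005)

for mesh in human_meshes:                      # meshes in varied postures
    textured = attach_logo(mesh, logo)         # the logo transformation
    image = render(textured, random_camera())  # differentiable renderer
    conf = person_confidence(detector(image))  # e.g. a wrapped YOLO score
    optimizer.zero_grad()
    conf.sum().backward()
    optimizer.step()  # descending on conf suppresses the person detection
</code></pre>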



<p>The researchers tested the success rate of their adversarial logo attack by deploying it against two state-of-the-art deep neural network-based object detectors, YOLOv2 and YOLOv3. In these evaluations, the 3-D adversarial logo robustly fooled both detectors, causing them to misclassify images taken from a variety of angles and showing humans in a range of postures.</p>
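<p>Measuring that success rate reduces to counting how often the detector misses the person across rendered views, along these lines (detects_person is a stand-in for a wrapped YOLO check, not an API from the paper):</p>

<pre><code>def attack_success_rate(frames, detects_person):
    """Fraction of frames on which the attack fools the detector.

    detects_person(frame) is a hypothetical predicate returning True
    when the detector still reports a person box in the frame.
    """
    fooled = sum(1 for frame in frames if not detects_person(frame))
    return fooled / len(frames)
</code></pre>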



<p>These results confirm the vulnerabilities of deep neural network-based techniques for detecting objects or humans in images. They thus further highlight the need to develop deep learning methods that are better at spotting adversarial images or logos and that are harder to fool using synthesized data.</p>
<p>The post <a href="https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/">Fooling deep neural networks for object detection with adversarial 3-D logos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fooling-deep-neural-networks-for-object-detection-with-adversarial-3-d-logos/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning capabilities aid healthcare cybersecurity</title>
		<link>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 28 Dec 2017 05:31:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cyberattack]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[security tools]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1926</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com Why is now the time for healthcare organizations to consider applying machine learning capabilities to cybersecurity? Matt Mellen: Healthcare has seen more than its fair share of cyberattacks for a variety of reasons and it urgently needed a game-changing security technology to prevent them. I think machine learning is that game changer, and it&#8217;s going <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Machine learning capabilities aid healthcare cybersecurity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>techtarget.com</strong></p>
<p><b>Why is now the time for healthcare organizations to consider applying machine learning capabilities to cybersecurity?</b></p>
<p>Matt Mellen: Healthcare has seen more than its fair share of cyberattacks for a variety of reasons, and it urgently needed a game-changing security technology to prevent them. I think machine learning is that game changer, and it&#8217;s going to have a pretty significant impact on [the ability of healthcare organizations] to protect themselves from cyberattacks and breaches, while at the same time improving healthcare practitioners&#8217; ability to provide highly accurate diagnoses. The key to making machine learning algorithms work properly is having a lot of data to feed into the algorithm. The more data, the better; the more data, the more accurate the result.</p>
<p>In healthcare, I know that hospital networks are building massive data lakes to store all their health information, with the intent of having it evaluated by machine learning algorithms and, hopefully, enabling better diagnoses. But in cybersecurity the winners are going to be &#8212; and by winners, I mean the security tools that will be the most effective &#8212; those that have a significant amount of threat data to feed into their machine learning algorithms.</p>
<p>Machine learning is clearly going to have a growing impact on the effectiveness of cyberattack prevention, and its reach extends beyond medical diagnoses to other areas of the field, like predictive analytics, which is predicting outcomes before they happen, and using natural language processing to extract meaning out of images, which is a real challenge in healthcare because, for example, radiology images are not easily searched or digested by software.</p>
<p>My recommendation is for CISOs of healthcare organizations to start planning to adopt machine learning capabilities in their cybersecurity programs, and to specifically look for security products whose machine learning is based on large data sets, ensuring consistent cyberattack coverage across endpoints, the network and the cloud.</p>
<p><b>What kind of investment will healthcare organizations have to make to apply machine learning capabilities to their cybersecurity programs?</b></p>
<p>Mellen: It really depends on the size of the organization. &#8230; But what I typically recommend is a phased approach. A lot of healthcare organizations focus on the edge of their network first: they figure out the ingress and egress points and protect those. And you can do that with a next-generation firewall. &#8230; It does not require a significant amount of change to the environment.</p>
<p><b>Do you see cyberattacks continuing to be a threat in 2018?</b></p>
<p>Mellen: Ransomware is definitely going to continue, given that it is the most effective and quickest way for attackers to monetize their efforts without getting caught. If you exfiltrate or steal protected health information from healthcare organizations, you have to figure out how to sell it. And when you go onto the dark web to sell it, there&#8217;s a higher risk of getting caught by the authorities. Hence, most attackers continue to rely widely on ransomware to make money and avoid getting caught.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Machine learning capabilities aid healthcare cybersecurity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
