<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>hackers Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/hackers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/hackers/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 02 Mar 2020 05:30:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Deep Learning Used to Trick Hackers</title>
		<link>https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 02 Mar 2020 05:30:01 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7150</guid>

					<description><![CDATA[<p>Source: unite.ai A group of computer scientists at the University of Texas at Dallas has developed a new approach for defending against cyberattacks. Rather than blocking hackers, they entice them in. The newly developed method is called DEEP-Dig (DEcEPtion DIGging), and it entices hackers into a decoy site so that the computer can learn their tactics. The <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/">Deep Learning Used to Trick Hackers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: unite.ai</p>



<p>A group of computer scientists at the University of Texas at Dallas has developed a new approach for defending against cyberattacks. Rather than blocking hackers, they entice them in.</p>



<p>The newly developed method, called DEEP-Dig (DEcEPtion DIGging), lures hackers into a decoy site so that the computer can learn their tactics. That information is then used to train the computer to recognize and stop future attacks.</p>
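<p>DEEP-Dig itself is not publicly released, so the following is only a minimal sketch of the general idea described above: a decoy service logs attacker requests, and those logs become labeled training data for a simple detector. All names, features, and the toy scoring rule here are hypothetical.</p>

```python
from collections import Counter

def tokenize(request):
    return request.lower().split()

class NaiveDetector:
    """Toy detector trained on requests captured by a decoy site.

    Tokens seen in decoy (attack) traffic raise a request's score;
    tokens seen in normal production traffic lower it.
    """
    def __init__(self):
        self.attack_counts = Counter()
        self.benign_counts = Counter()

    def observe(self, request, is_attack):
        counts = self.attack_counts if is_attack else self.benign_counts
        counts.update(tokenize(request))

    def is_suspicious(self, request):
        score = sum(self.attack_counts[t] - self.benign_counts[t]
                    for t in tokenize(request))
        return score > 0

# Requests lured into the decoy are labeled attacks; ordinary
# production traffic supplies benign examples.
det = NaiveDetector()
det.observe("GET /admin.php?id=1' OR '1'='1", is_attack=True)
det.observe("GET /index.html", is_attack=False)
print(det.is_suspicious("GET /admin.php?id=2' OR '1'='1"))  # → True
```

<p>A real system would use far richer features (payloads, timing, session structure) and a proper classifier, but the data flow is the same: deception supplies the labeled attack examples that are otherwise scarce.</p>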



<p>The UT Dallas researchers presented their paper, “Improving Intrusion Detectors by Crook-Sourcing,” at the annual Computer Security Applications Conference in December in Puerto Rico. The group also presented “Automating Cyberdeception Evaluation with Deep Learning” at the Hawaii International Conference on System Sciences in January.</p>



<p>DEEP-Dig is part of an increasingly popular cybersecurity field called deception technology, which, as the name suggests, relies on traps set for hackers. The researchers hope the approach will prove especially useful for defense organizations.</p>



<p>Dr. Kevin Hamlen is a Eugene McDermott Professor of computer science.</p>



<p>“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” he said. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”</p>



<p>This new approach is being used to solve some of the major problems associated with the use of artificial intelligence (AI) for cybersecurity. One of those problems is that there is a shortage of data needed to train computers to detect hackers, and this is caused by privacy concerns. According to Gbadebo Ayoade MS’14, PhD’19, better data means a better ability to detect attacks. Ayoade presented the findings at the conferences, and he is now a data scientist at Procter &amp; Gamble Co.</p>



<p>“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”</p>



<p>The most common method used by hackers is to begin with simpler tricks and progressively get more sophisticated, according to Hamlen. Most of the cyber defense programs being used today attempt to disrupt the intruders immediately, so the intruders’ techniques are never learned. DEEP-Dig attempts to solve this by pushing the hackers into a decoy site full of disinformation so that the techniques can be observed. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to the hackers.</p>



<p>“Attackers will feel they’re successful,” Khan said.</p>



<p>Cyberattacks are a major concern for governmental agencies, businesses, nonprofits, and individuals. According to a report to the White House from the Council of Economic Advisers, the attacks cost the U.S. economy more than $57 billion in 2016.</p>



<p>DEEP-Dig could play a major role in evolving defense tactics even as hacking techniques evolve. Intruders could disrupt the method if they realized they had entered a decoy site, but Hamlen is not overly concerned.</p>



<p>“So far, we’ve found this doesn’t work. When an attacker tries to play along, the defense system just learns how hackers try to hide their tracks,” Hamlen said. “It’s an all-win situation — for us, that is.”</p>



<p>Other researchers involved in the work include Frederico Araujo PhD’16, research scientist at IBM’s Thomas J. Watson Research Center; Khaled Al-Naami PhD’17; Yang Gao, a UT Dallas computer science graduate student; and Dr. Ahmad Mustafa of Jordan University of Science and Technology.</p>



<p>The research was partly supported by the Office of Naval Research, the National Security Agency, the National Science Foundation, and the Air Force Office of Scientific Research.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/">Deep Learning Used to Trick Hackers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-used-to-trick-hackers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hackers Attacked LiveRamp &#8211; A Big Data Partner of Facebook For A Bigger Advertising Scam</title>
		<link>https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/</link>
					<comments>https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 03 Feb 2020 06:50:39 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[apps]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data partner]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6479</guid>

					<description><![CDATA[<p>Source: digitalinformationworld.com As soon as hackers take over your account, you typically start seeing suspicious posts about deals on products you would never buy online. But what about a situation where hackers infiltrate the account of one of Facebook’s biggest data partners? Yes, we are going to <a class="read-more-link" href="https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/">Hackers Attacked LiveRamp &#8211; A Big Data Partner of Facebook For A Bigger Advertising Scam</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: digitalinformationworld.com</p>



<p>As soon as hackers take over your account, you typically start seeing suspicious posts about deals on products you would never buy online. But what about a situation where hackers infiltrate the account of one of Facebook’s biggest data partners? Yes, we are going to talk about thousands of dollars and credit cards being stolen in just such a case.</p>



<p>Recently, hackers gained access to the personal account of a LiveRamp employee, with the aim of taking control of the company’s Business Manager account and running scam ads paid for with other people’s money.</p>



<p>In doing so, they successfully attacked one of Facebook’s most prominent data partners; however, the damage was contained. The incident affected a limited number of LiveRamp customers and associated Ad Accounts, and Facebook actively informed the affected parties.</p>



<p>LiveRamp did not disclose the exact number of customers affected by the hack, and stated that the company has security measures in place, especially for employees who deal with Facebook ads accounts. One thing is certain, though: thousands of the victims’ dollars were spent tricking users into buying fake products. Facebook, for its part, confirmed in November that the personal account of an admin for a Business Manager account had been compromised, but did not mention LiveRamp directly.</p>



<p>Nevertheless, LiveRamp and Facebook worked together to cut off the unauthorized access and restore normal functionality for users.</p>



<p>This isn’t the first time hackers have targeted the hub of Facebook’s empire &#8211; the advertisers. Advertising has long been Facebook&#8217;s lifeline: it is expected to generate $84 billion in revenue in 2020 from 2.2 billion users, and the social media giant is becoming more and more effective with targeted ads. The company facilitates businesses around the world, and that success drew the hackers’ attention.</p>



<p>Hence, the bad guys knew that they could scam countless people through the tools that marketers use on the social network.</p>



<h2 class="wp-block-heading">Why Was LiveRamp Worth It?</h2>



<p>Besides being a big data partner of Facebook, LiveRamp is a marketing powerhouse that has earned its name by matching data from real-world actions to online identities, exceeding advertisers’ expectations. That is why LiveRamp is a favorite of more than 300 businesses and data providers, including big names like Google, MasterCard, Uber, Snapchat, Spotify and Equifax.</p>



<hr class="wp-block-separator"/>



<p>LiveRamp helps Facebook advertisers target ads on the basis of data derived from users’ offline activities, and it has also integrated Facebook’s Offline Conversions API to help those same advertisers gauge the effectiveness of their marketing campaigns by knowing how many people actually bought the product.</p>



<p>LiveRamp doesn’t run ads on Facebook itself, but as a Facebook-approved partner it has the access to do so. When hackers ran a series of ads through LiveRamp&#8217;s customer accounts on Facebook, one of the ads was viewed more than 60,000 times and directed users to a page built to steal their credit card details.</p>



<h2 class="wp-block-heading">Facebook’s Security</h2>



<p>Facebook continuously reminds its users of a number of security tools, primarily two-factor authentication and login alerts, so that they know when a hacker has tried to intrude. The social network even offers a Security Center page for business accounts, along with a recommendation that businesses perform quarterly security cleanups to make sure employees don’t have unnecessary access.</p>



<p>However, Facebook only recommends these security measures; it does not make them a requirement, even for big partners like LiveRamp, and that is a real problem.</p>



<p>Marcin Kleczynski, CEO of cybersecurity company Malwarebytes, raised the concern that Facebook doesn’t require a separate Business Manager account; instead, users can manage multi-million-dollar pages entirely through their personal profiles.</p>



<p>He further questioned why Facebook has never opted for higher standards for its bigger partners, especially given how common poor security habits are, such as reusing the same password everywhere or not turning on two-factor authentication.</p>



<p>Honestly, until Facebook makes these important security measures a requirement, cybercriminals will have a good chance of gaining access to million-dollar advertising campaigns simply by attacking personal profiles.</p>
<p>The post <a href="https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/">Hackers Attacked LiveRamp &#8211; A Big Data Partner of Facebook For A Bigger Advertising Scam</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/hackers-attacked-liveramp-a-big-data-partner-of-facebook-for-a-bigger-advertising-scam/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft to hackers: Please attack Azure</title>
		<link>https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 11 Jun 2019 10:08:47 +0000</pubDate>
				<category><![CDATA[Microsoft Azure Machine Learning]]></category>
		<category><![CDATA[attack]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[Microsoft]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3704</guid>

					<description><![CDATA[<p>Source:- toledoblade.com Microsoft Corp. has what may sound like a counter-intuitive request: Please try to hack into Azure more often. The company isn’t encouraging malicious attacks but it does want security researchers to spend more time poking holes in its flagship cloud service so the company can learn about flaws and fix them. Many so-called White <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/">Microsoft to hackers: Please attack Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- toledoblade.com</p>
<p>Microsoft Corp. has what may sound like a counter-intuitive request: Please try to hack into Azure more often.</p>
<p>The company isn’t encouraging malicious attacks but it does want security researchers to spend more time poking holes in its flagship cloud service so the company can learn about flaws and fix them.</p>
<p>Many so-called White Hat hackers do this for the company’s older products like Windows, Office and browsers, but there aren’t enough working on Azure, said Kymberlee Price, who oversees community programs in Microsoft’s Security Response Center. The company is planning several steps to change that, including explicitly stating it won’t take legal action against researchers and creating a game-like reward system that gives successful bug-finders perks and bragging rights.</p>
<p>Microsoft currently offers bug bounty payments for Azure, but “it’s just not getting as much activity as I would like to see,” Ms. Price added.</p>
<p>It’s an issue Microsoft needs to worry about as it bets big on cloud services for revenue growth. The shift to cloud computing is changing cybersecurity, providing new opportunities and new challenges. One of the biggest risks is that Microsoft now runs services for customers in its cloud, which means the software giant is on the hook to protect them.</p>
<p>Microsoft is planning to release what’s called a Safe Harbor statement giving researchers legal clearance to report a vulnerability. “We’ve always done that but we’ve never formally articulated it,” Ms. Price said. It’s important to publish a formal policy as researchers work more on cloud systems where they may worry they’ll accidentally knock a service offline or access customer data and get in trouble, she said.</p>
<p>In her first stint at Microsoft in the 2000s, Ms. Price was one of the security engineers pioneering the company’s effort to collaborate with security researchers, rather than viewing them as adversaries. She left in 2009 and returned a little more than two years ago.</p>
<p>Right now attackers still target networks located at a company’s own offices more frequently than the cloud, but that’s changing, said Azure Chief Technology Officer Mark Russinovich. “The level of sophistication of the attackers and the interest in [attacking] the cloud just continues to grow as the cloud continues to grow,” he added.</p>
<p>Cloud security requires new tools and techniques but it also enables companies like Microsoft to track and analyze vast amounts of data to find malicious actors and scan networks of hundreds of thousands of customers so it can see attacks materialize.</p>
<p>Mr. Russinovich spoke about protecting the cloud at an academic conference at Microsoft attended by hundreds of Microsoft workers and security engineers from Amazon Web Services, Google, Nike and others. The event grew out of a trail-running group that includes Microsoft’s Ram Shankar Siva Kumar, who oversees a team of engineers who apply machine-learning to cybersecurity, and peers at AWS and Google. The group would often share techniques and research while on the trail and the idea for a formal conference to exchange ideas was born.</p>
<p>The hope is that sharing data, tools and techniques publicly will help everyone better fend off attackers. As long as private customer information is protected, Microsoft wants to share security data, said Steve Dispensa, general manager, cloud and AI security at Microsoft.</p>
<p>“The idea that we’re smarter than the attackers is a malignant myth — they know before we do where the weak spot is,” he said. “We publish data, we all learn, a rising tide lifts all boats.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/">Microsoft to hackers: Please attack Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-to-hackers-please-attack-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>HOW BIG DATA CAN TRANSFORM THE FINANCE INDUSTRY</title>
		<link>https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/</link>
					<comments>https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 21 May 2018 05:45:21 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Big Data Analytics]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[FINANCE INDUSTRY]]></category>
		<category><![CDATA[hackers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2419</guid>

					<description><![CDATA[<p>Source &#8211; bbntimes.com Big data in the finance industry will help overcome major challenges, gaining valuable insights to improve customer satisfaction and overall banking experience. In today&#8217;s digital world, organizations generate and collect data to improve transaction processing. Vast chunks of data get assimilated daily and it is essential to maintain them securely while meeting security <a class="read-more-link" href="https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/">HOW BIG DATA CAN TRANSFORM THE FINANCE INDUSTRY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; bbntimes.com</p>
<p>Big data in the finance industry will help overcome major challenges, gaining valuable insights to improve customer satisfaction and overall banking experience.</p>
<p>In today&#8217;s digital world, organizations generate and collect data to improve transaction processing. Vast chunks of data are assimilated daily, and it is essential to maintain them securely while meeting security constraints. Most banks fail to properly exploit these digital assets. It is crucial that the collected data be put to good use, as it can contain significant hidden information that can lead to new opportunities. According to <a href="https://www.ibm.com/blogs/bluemix/2015/11/future-of-cognitive-computing/" target="_blank" rel="nofollow noopener">IBM</a>, over 80% of data is dark data, a share expected to rise to 93% by 2020. However, all of the data that is generated and collected can be processed accurately with big data analytics, which aims at analyzing, storing, querying, and updating voluminous amounts of information. Big data has the potential to transform the finance industry by serving customers more quickly.</p>
<h2>Challenges faced by the Finance Industry</h2>
<p>The finance industry faces specific problems with traditional data management models. One of the most significant challenges is the growing amount of fraudulent activity. The conventional model fails to maintain the security of digital assets, which has enabled hackers to obtain vital customer and industry data. This challenge needs to be overcome as soon as possible, since there is a high risk of severe damage to the finance industry. Another major problem is the analysis of customer sentiment. Clients represent real wealth for any organization, and every industry must keep fulfilling their ever-increasing demands; the traditional model, however, provides no technique for analyzing customers. Nor can the conventional model segment customers based on their transactions and other internal and external processes. Furthermore, business users cannot target the right audience for their marketing with the traditional model. The list is endless. There is an urgent need for an advanced technology that can help banks overcome these challenges, which are impacting their financial results.</p>
<h2>Big data in the Finance Industry</h2>
<div><img fetchpriority="high" decoding="async" src="https://media.licdn.com/dms/image/C4D12AQFEcAM_Ldzr2g/article-inline_image-shrink_1000_1488/0?e=2126476800&amp;v=beta&amp;t=VBXDMUlmEE4x0k6dGXbA24YPiVvJVldI8X2U_8LXERk" alt="" width="819" height="478" data-media-urn="urn:li:digitalmediaAsset:C4D12AQFEcAM_Ldzr2g" data-li-src="https://media.licdn.com/dms/image/C4D12AQFEcAM_Ldzr2g/article-inline_image-shrink_1000_1488/0?e=2126476800&amp;v=beta&amp;t=VBXDMUlmEE4x0k6dGXbA24YPiVvJVldI8X2U_8LXERk" /></div>
<p>Given the challenges the finance industry faces with traditional data management models, it is high time to leverage big data analytics to overcome the biggest issues. A few years back, a <a href="https://www.forbes.com/sites/louiscolumbus/2014/10/19/84-of-enterprises-see-big-data-analytics-changing-their-industries-competitive-landscapes-in-the-next-year/#54379d5a17de" target="_blank" rel="nofollow noopener">survey</a> found that 84% of enterprises see big data analytics changing their industries&#8217; competitive landscapes. This suggests that big data analytics can transform various industries, including finance. <a href="https://www.allerin.com/blog/10-big-data-technologies-you-must-know" target="_blank" rel="nofollow noopener">Big data analytics</a> helps the finance industry maintain the security and privacy of collected data through the use of predictive analytics.</p>
<p>Predictive analytics helps predict fraudulent activity before it happens, thereby helping detect hackers. It could likewise be used to predict an impending machine breakdown. Big data analytics could also serve the finance industry for marketing purposes: through sentiment analysis on social media platforms, researchers can gather the sentiments, opinions, and feedback of customers, which can help a business adjust if its customers are unhappy. This helps banks analyze their customers better and alter their business where necessary.</p>
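<p>As a purely hypothetical illustration of the predictive idea (not any bank's actual system), fraud screening in its simplest form flags transactions whose amounts deviate sharply from a customer's history:</p>

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` population
    standard deviations away from the mean amount."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Hypothetical card activity: five routine purchases, one outlier.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 4900.0]
print(flag_anomalies(history))  # → [5]
```

<p>Production fraud models replace the z-score with learned models over many features, but the principle of scoring new transactions against past behavior is the same.</p>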
<p>Furthermore, big data analytics helps with customer segmentation, which allows the industry to find the right target audience. Banks can thereby sharpen their marketing and maximize profits if they properly leverage big data analytics. According to a recent <a href="http://www.ingrammicroadvisor.com/data-center/5-big-data-use-cases-in-banking-and-financial-services" target="_blank" rel="nofollow noopener">IBM survey</a>, more than 25% of financial institutions are using big data analytics to stay ahead of the curve. Big data in finance has the potential to quickly identify real-time customer sentiment, prevent fraud, improve marketing tactics, and accelerate business growth.</p>
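<p>Customer segmentation can be as simple as bucketing customers by total spend and transaction frequency. The tiers and thresholds below are hypothetical, just to make the idea concrete:</p>

```python
def segment(customers):
    """customers: dict mapping name -> (total_spend, n_transactions).
    Returns a marketing tier per customer."""
    tiers = {}
    for name, (spend, freq) in customers.items():
        if spend >= 10_000 and freq >= 50:
            tiers[name] = "premium"
        elif spend >= 1_000 or freq >= 10:
            tiers[name] = "regular"
        else:
            tiers[name] = "occasional"
    return tiers

book = {"alice": (12_500, 80), "bob": (2_300, 12), "carol": (150, 3)}
print(segment(book))  # → {'alice': 'premium', 'bob': 'regular', 'carol': 'occasional'}
```

<p>Banks typically derive such tiers by clustering on recency, frequency, and monetary value rather than fixed thresholds, but either way the output is a target audience per campaign.</p>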
<p>The post <a href="https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/">HOW BIG DATA CAN TRANSFORM THE FINANCE INDUSTRY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-big-data-can-transform-the-finance-industry/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence and machine learning help hackers steal identities</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Feb 2018 05:28:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI strategies]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2012</guid>

					<description><![CDATA[<p>Source &#8211; born2invest.com Artificial intelligence (AI) and machine learning have been around for many years. A computer or any smart device can work as efficiently as a human being in reading and computing data. Through AI, a device can perceive the environment and take necessary actions to increase chances of success. AI and machine learning have aided <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/">Artificial intelligence and machine learning help hackers steal identities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>born2invest.com</strong></p>
<p>Artificial intelligence (AI) and machine learning have been around for many years. A computer or any smart device can work as efficiently as a human being in reading and computing data. Through AI, a device can perceive the environment and take necessary actions to increase chances of success.</p>
<p>AI and machine learning have aided users in securing big data. For example, many organizations are spending a lot of money to boost their defense systems.</p>
<h4><b>Concern of cybersecurity specialists</b></h4>
<p>Cybersecurity professionals have identified many threats to important information and data in recent years, and with the help of AI they have been able to take measures to combat them. However, it is a major concern that hackers themselves are exploiting AI to steal sensitive information.</p>
<h4><b>Hackers and attackers</b></h4>
<p>Despite companies having great AI strategies, hackers and attackers continue to find ways to circumvent security measures. That is to say, most of our important data remains in danger of being compromised.</p>
<p>Hackers are also using the same machine learning and intelligence strategies to obtain online passwords, credit card numbers, ATM PINs and more. They have also been found using AI to develop more complicated and advanced threats to our security systems.</p>
<h4><b>Staying ahead of the new risks</b></h4>
<p>No doubt, hackers have a huge amount of creativity that makes them successful in breaching firewalls. Today, cybersecurity professionals should be able to think out of the box and always stay ahead of new possible risks to security systems.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/">Artificial intelligence and machine learning help hackers steal identities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-and-machine-learning-help-hackers-steal-identities/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Hackers Have Already Started to Weaponize Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 12 Sep 2017 06:27:23 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[cyber attacks]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[radical technological]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1080</guid>

					<description><![CDATA[<p>Source &#8211; gizmodo.com Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, <a class="read-more-link" href="https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/">Hackers Have Already Started to Weaponize Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>gizmodo.com</strong></p>
<p>Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.</p>
<p>The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, <em>Forbes</em> staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.</p>
<p>Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.</p>
<p>The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is <em>already</em> being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.</p>
<p>“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.</p>
<h3>Scales of intelligence</h3>
<p>Marc Goodman, author of <em>Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It</em>, says he isn’t surprised that so many Black Hat attendees see weaponized AI as being imminent, as it’s been part of cyber attacks for years.</p>
<p>“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Goodman told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”</p>
<p>Indeed, it’s likely on this issue where the expert opinions diverge.</p>
<p>The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.</p>
<p>Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator as being anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/">Hackers Have Already Started to Weaponize Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/hackers-have-already-started-to-weaponize-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Teaching A.I. Systems to Behave Themselves</title>
		<link>https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/</link>
					<comments>https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 14 Aug 2017 07:02:45 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[A.I. Systems]]></category>
		<category><![CDATA[A.I. techniques]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[Photos app]]></category>
		<category><![CDATA[Reinforcement Learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=616</guid>

					<description><![CDATA[<p>Source &#8211; nytimes.com SAN FRANCISCO — At OpenAI, the artificial intelligence lab founded by Tesla’s chief executive, Elon Musk, machines are teaching themselves to behave like humans. But sometimes, this goes wrong. Sitting inside OpenAI’s San Francisco offices on a recent afternoon, the researcher Dario Amodei showed off an autonomous system that taught itself to play Coast Runners, <a class="read-more-link" href="https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/">Teaching A.I. Systems to Behave Themselves</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; nytimes.com</p>
<div class="story-body-supplemental">
<div class="story-body story-body-1">
<p id="story-continues-1" class="story-body-text story-content" data-para-count="197" data-total-count="197">SAN FRANCISCO — At OpenAI, the artificial intelligence lab founded by Tesla’s chief executive, Elon Musk, machines are teaching themselves to behave like humans. But sometimes, this goes wrong.</p>
<p class="story-body-text story-content" data-para-count="282" data-total-count="479">Sitting inside OpenAI’s San Francisco offices on a recent afternoon, the researcher Dario Amodei showed off an autonomous system that taught itself to play Coast Runners, an old boat-racing video game. The winner is the boat with the most points that also crosses the finish line.</p>
<p class="story-body-text story-content" data-para-count="342" data-total-count="821">The result was surprising: The boat was far too interested in the little green widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy. It drove in endless circles, colliding with other vessels, skidding into stone walls and repeatedly catching fire.</p>
<p class="story-body-text story-content" data-para-count="473" data-total-count="1294">Mr. Amodei’s burning boat demonstrated the risks of the A.I. techniques that are rapidly remaking the tech world. Researchers are building machines that can learn tasks largely on their own. This is how Google’s DeepMind lab created a system that could beat the world’s best player at the ancient game of Go. But as these machines train themselves through hours of data analysis, they may also find their way to unexpected, unwanted and perhaps even harmful behavior.</p>
<p id="story-continues-2" class="story-body-text story-content" data-para-count="253" data-total-count="1547">That’s a concern as these techniques move into online services, security devices and robotics. Now, a small community of A.I. researchers, including Mr. Amodei, is beginning to explore mathematical techniques that aim to keep the worst from happening.</p>
</div>
</div>
<div class="story-body-supplemental">
<div class="story-body story-body-2">
<p id="story-continues-4" class="story-body-text story-content" data-para-count="214" data-total-count="1761">At OpenAI, Mr. Amodei and his colleague Paul Christiano are developing algorithms that can not only learn tasks through hours of trial and error, but also receive regular guidance from human teachers along the way.</p>
<p class="story-body-text story-content" data-para-count="315" data-total-count="2076">With a few clicks here and there, the researchers now have a way of showing the autonomous system that it needs to win points in Coast Runners while also moving toward the finish line. They believe that these kinds of algorithms — a blend of human and machine instruction — can help keep automated systems safe.</p>
<p id="story-continues-6" class="story-body-text story-content" data-para-count="395" data-total-count="2471">For years, Mr. Musk, along with other pundits, philosophers and technologists, has warned that machines could spin outside our control and somehow learn malicious behavior their designers didn’t anticipate. At times, these warnings have seemed overblown, given that today’s autonomous car systems can get tripped up by even the most basic tasks, like recognizing a bike lane or a red light.</p>
<p class="story-body-text story-content" data-para-count="173" data-total-count="2644">But researchers like Mr. Amodei are trying to get ahead of the risks. In some ways, what these scientists are doing is a bit like a parent teaching a child right from wrong.</p>
<p class="story-body-text story-content" data-para-count="483" data-total-count="3127">Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.</p>
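The reward loop described above can be sketched in a few lines. This is a minimal illustration only, not OpenAI’s actual training code: the action names and reward values are invented for the example, and a real system learns from game frames rather than a fixed reward table.

```python
import random

random.seed(0)

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    """Toy reinforcement learning: try actions at random at first, keep a
    running estimate of the reward each one brings, and gradually favor
    whichever action pays best."""
    rewards = {"circle": 1.0, "crash": -1.0, "finish": 5.0}  # invented reward signal
    q = {action: 0.0 for action in rewards}  # estimated value of each action
    for _ in range(episodes):
        if random.random() < epsilon:            # explore occasionally
            action = random.choice(list(q))
        else:                                    # otherwise exploit the best estimate
            action = max(q, key=q.get)
        q[action] += alpha * (rewards[action] - q[action])  # nudge estimate toward reward
    return q

q = train()
best = max(q, key=q.get)  # the agent settles on the highest-reward action
```

Note that if the reward table valued circling for widgets above finishing, the same loop would converge on circling instead, which is exactly the Coast Runners failure described earlier.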
<p class="story-body-text story-content" data-para-count="53" data-total-count="3180">This video game training has real-world implications.</p>
<p class="story-body-text story-content" data-para-count="448" data-total-count="3628">If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.</p>
<p class="story-body-text story-content" data-para-count="203" data-total-count="3831">All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.</p>
<p id="story-continues-7" class="story-body-text story-content" data-para-count="345" data-total-count="4176">Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Because the work spanned two of the world’s top A.I. labs, two that hadn’t really worked together in the past, these algorithms are considered a notable step forward in A.I. safety research.</p>
<p class="story-body-text story-content" data-para-count="222" data-total-count="4398">“This validates a lot of the previous thinking,” said Dylan Hadfield-Menell, a researcher at the University of California, Berkeley. “These types of algorithms hold a lot of promise over the next five to 10 years.”</p>
<p class="story-body-text story-content" data-para-count="320" data-total-count="4718">The field is small, but it is growing. As OpenAI and DeepMind build teams dedicated to A.I. safety, so too is Google’s stateside lab, Google Brain. Meanwhile, researchers at universities like U.C. Berkeley and Stanford University are working on similar problems, often in collaboration with the big corporate labs.</p>
<p class="story-body-text story-content" data-para-count="202" data-total-count="5985">That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you’re someone else.</p>
<p class="story-body-text story-content" data-para-count="248" data-total-count="6233">“If you train an object-recognition system on a million images labeled by humans, you can still create new images where a human and the machine disagree 100 percent of the time,” Mr. Goodfellow said. “We need to understand that phenomenon.”</p>
<p class="story-body-text story-content" data-para-count="316" data-total-count="6549">Another big worry is that A.I. systems will learn to prevent humans from turning them off. If the machine is designed to chase a reward, the thinking goes, it may find that it can chase that reward only if it stays on. This oft-described threat is much further off, but researchers are already working to address it.</p>
<p id="story-continues-10" class="story-body-text story-content" data-para-count="331" data-total-count="6880">Mr. Hadfield-Menell and others at U.C. Berkeley recently published a paper that takes a mathematical approach to the problem. A machine will seek to preserve its off switch, they showed, if it is specifically designed to be uncertain about its reward function. This gives it an incentive to accept or even seek out human oversight.</p>
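The intuition behind that result can be checked with a toy expected-utility calculation. This is an illustrative sketch of the idea only, not the Berkeley paper’s formal model: a hypothetical agent is unsure whether its action’s true utility u is positive, and the human overseer would switch it off exactly when u is negative.

```python
import random

random.seed(1)

# The agent's uncertainty about its action's true utility, modeled as
# 10,000 hypothetical draws from a uniform prior over [-1, 1].
samples = [random.uniform(-1, 1) for _ in range(10_000)]

# Acting immediately (disabling the off switch): the agent gets u, whatever it is.
act_directly = sum(samples) / len(samples)

# Deferring to the human: the human shuts the agent off whenever u < 0,
# so the agent receives max(u, 0) instead.
defer_to_human = sum(max(u, 0) for u in samples) / len(samples)
```

Because the average of max(u, 0) can never fall below the average of u, the uncertain agent expects to do at least as well by leaving its off switch alone, which is the incentive the paper formalizes.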
<p class="story-body-text story-content" data-para-count="203" data-total-count="7083">Much of this work is still theoretical. But given the rapid progress of A.I. techniques and their growing importance across so many industries, researchers believe that starting early is the best policy.</p>
<p class="story-body-text story-content" data-para-count="350" data-total-count="7433" data-node-uid="1">“There’s a lot of uncertainty around exactly how rapid progress in A.I. is going to be,” said Shane Legg, who oversees the A.I. safety work at DeepMind. “The responsible approach is to try to understand different ways in which these technologies can be misused, different ways they can fail and different ways of dealing with these issues.”</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/">Teaching A.I. Systems to Behave Themselves</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/teaching-a-i-systems-to-behave-themselves/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
