<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>U.S. Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/u-s/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/u-s/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 27 Jul 2019 17:21:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>What’s Wrong with Deep Learning?</title>
		<link>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 27 Jul 2019 17:21:14 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI technique]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Naval Research Laboratory]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[U.S.]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4165</guid>

					<description><![CDATA[<p>Source: machinedesign.com Artificial Intelligence (AI) gets plenty of attention these days, but one researcher at the U.S. Naval Research Laboratory believes one particular AI technique might be <a class="read-more-link" href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">What’s Wrong with Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: machinedesign.com</p>



<p>Artificial Intelligence (AI) gets plenty of attention these days, but one researcher at the U.S. Naval Research Laboratory believes one particular AI technique might be getting a little too much.</p>



<p>“People have focused on an area of machine learning—deep learning (aka deep networks) — and less so on the variety of other artificial intelligence techniques,” says Ranjeev Mittu, head of  NRL’s Information Management and Decision Architectures Branch. He has been working on AI for more than 20 years. “The biggest limitation of deep networks is that we still lack a complete understanding of how these networks arrive at solutions.”</p>



<p>Deep learning is a machine learning technique that can recognize patterns, such as identifying a collection of pixels as an image of a dog. The technique involves layering neurons together, with each layer devoted to learning a different level of abstraction.</p>



<p>In the dog image example, the lower layers of the neural network learn primitive details such as pixel values.&nbsp;The next set attempts to learn edges; higher layers learn a combination of edges such as those that form a nose. With enough layers, these networks can recognize images nearly as well as humans.</p>
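<p>The pixels-to-edges-to-parts progression described above can be sketched as a toy forward pass. This is purely illustrative Python with hand-picked weights (not a trained network, and not code from the NRL work): each dense layer recombines the previous layer&#8217;s outputs into a higher-level feature.</p>

```python
# Illustrative sketch: a tiny fully connected network whose successive
# layers transform raw inputs into progressively more abstract features.
# All weights are fixed toy values, not a trained model.

def relu(x):
    # Rectified linear activation: negative sums are clipped to zero.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # One dense layer: each output neuron is a weighted sum of all inputs.
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# "Pixels" -> "edges" -> "parts": layers of increasing abstraction.
pixels = [0.2, 0.8, 0.5, 0.1]
edges = layer(pixels, [[1, -1, 0, 0], [0, 0, 1, -1]], [0.0, 0.0])  # crude contrast detectors
parts = layer(edges, [[1, 1]], [0.0])                              # combination of edges
print(edges, parts)
```

<p>A real deep network works the same way in principle, just with millions of learned weights and many more layers.</p>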



<p>But deep learning systems can be fooled easily just by changing a small number of pixels, according to Mittu. “You can have adversarial ‘attacks’: once you’ve created a model that recognizes dogs by showing it millions of pictures of dogs, changing a small number of pixels may cause the network to misclassify an image as a rabbit, for example.”</p>
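<p>A hypothetical toy version of the kind of attack Mittu describes: a linear &#8220;dog vs. rabbit&#8221; scorer (our invention, not any real system) flips its decision when just two influential input values are nudged against the model.</p>

```python
# Toy adversarial perturbation on a made-up linear classifier.

def classify(weights, pixels):
    score = sum(w * p for w, p in zip(weights, pixels))
    return "dog" if score > 0 else "rabbit"

weights = [0.5, -1.2, 0.8, 2.0, -0.3]
image = [0.9, 0.1, 0.7, 0.6, 0.2]   # the clean image scores as "dog"

# Adversarial nudge: push the two most influential "pixels" against the model.
perturbed = list(image)
perturbed[3] -= 0.8   # large positive weight -> decrease this pixel
perturbed[1] += 0.9   # large negative weight -> increase this pixel

print(classify(weights, image), "->", classify(weights, perturbed))
```

<p>Real attacks on deep networks follow the same logic, using the network&#8217;s gradients to find which pixels to nudge.</p>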



<p>The biggest flaw in this machine learning technique, according to Mittu, is that there is a large amount of art to building these networks, which means there are few scientific methods to help understand when they will fail.</p>



<p>“Although deep learning has been highly successful, it is also currently limited because there is little visibility into its decision rationale. Until we truly reach a point where this technique becomes fully ‘explainable,’ it cannot inform humans as to how it arrives at a solution, or why it failed. We have to realize that deep networks are just one tool in the AI toolbox.”</p>



<p>He stresses that humans have to stay in the loop. “Imagine you have an automated threat-detection system on the bridge of your ship and it picks up a small object on the horizon,” Mittu says. “The deep network classification may indicate it is a fast attack craft coming at you, but you know a small set of uncertain pixels can mislead the algorithm. Do you believe it?</p>



<p>“A human will have to examine it further,” he continues. “There may always need to be a human in the loop for high-risk situations. There could be a high degree of uncertainty, and the challenge is to increase the classification accuracy while keeping the false alarm rate low. It is sometimes difficult to strike the perfect balance.”</p>



<p>When it comes to machine learning, the key factor, simply put, is data.</p>



<p>Consider one of Mittu’s previous projects: analyzing commercial shipping vessel movements around the world. The goal was to have machine learning discern patterns in vessel traffic to identify ships involved in illicit activities. It proved a difficult problem to model and understand.</p>



<p>“We cannot have a global model because the behaviors differ for vessel classes, owners, and other characteristics,” he explains. “It is even different seasonally, because of sea state and weather patterns.”</p>



<p>But the bigger problem, Mittu found, was the possibility of mistakenly using poor-quality data.</p>



<p>“Ships transmit their location and other information, just like aircraft. But what they transmit can be spoofed,” Mittu said. “You don’t know if it is good or bad information. It is like changing those few pixels on the dog image that causes the system to fail.”</p>



<p>Missing data is another issue. Imagine a case in which you must move large numbers of people and materials on a regular basis to sustain military operations, and you’re relying on incomplete data to predict how you might act more efficiently.</p>



<p>“The difficulty comes when you start to train machine learning algorithms on poor quality data,” Mittu says. “Machine learning becomes unreliable at some point, and operators will not trust the algorithms’ outcomes.”</p>
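<p>A small, hypothetical demonstration of that failure mode (not from the article): a one-feature learner that places its decision threshold midway between the two class means. Corrupting a fraction of the training labels drags the fitted threshold away from its clean position, so the model starts missing genuine positives near the boundary.</p>

```python
import random

# Two well-separated classes on a single feature.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(500)] + [random.gauss(3, 1) for _ in range(500)]
ys = [0] * 500 + [1] * 500

def fit_threshold(xs, ys):
    # "Learner": decision boundary midway between the two class means.
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2

def accuracy(threshold, xs, ys):
    return sum((x > threshold) == bool(y) for x, y in zip(xs, ys)) / len(xs)

clean_acc = accuracy(fit_threshold(xs, ys), xs, ys)

# "Poor-quality data": 30% of the positive labels are wrongly recorded as 0,
# which inflates the mean of the "negative" class and shifts the threshold.
noisy = [0 if (y == 1 and random.random() < 0.3) else y for y in ys]
noisy_acc = accuracy(fit_threshold(xs, noisy), xs, ys)
print(clean_acc, noisy_acc)
```

<p>The exact accuracy drop depends on the noise rate and the data, but the threshold shift itself is systematic: bad labels bias the model, not just its variance.</p>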



<p>Mittu’s team continues to pursue AI innovations, and they advocate an interdisciplinary approach to employing AI systems to solve complex problems.</p>



<p>“There are many ways to improve predictive capabilities, but probably the best-of-breed will take a holistic approach and employ several AI techniques and strategically include the human decision-maker,” he says.</p>



<p>“Aggregating various techniques (similar to ‘boosting’), which may ‘weight’ algorithms differently, could provide a better answer. By employing combinations of AI techniques, the resulting system may also be more robust to poor data quality.”</p>
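<p>A minimal sketch of the weighted-aggregation idea, assuming hand-written rule &#8220;models&#8221; and made-up weights rather than any real boosting implementation: each model votes, its vote is scaled by its weight, and the signed total decides the outcome.</p>

```python
# Hypothetical weak "models": simple rules over a feature dictionary.
def model_size(x):   return 1 if x["length_m"] > 20 else 0
def model_speed(x):  return 1 if x["speed_kn"] > 30 else 0
def model_radar(x):  return 1 if x["radar_strong"] else 0

def weighted_vote(models_with_weights, x):
    # Each model votes +1 ("threat") or -1, scaled by its weight;
    # the sign of the total decides the ensemble's answer.
    score = sum(w * (1 if m(x) == 1 else -1) for m, w in models_with_weights)
    return 1 if score > 0 else 0

ensemble = [(model_size, 0.2), (model_speed, 0.5), (model_radar, 0.3)]
contact = {"length_m": 12, "speed_kn": 38, "radar_strong": True}
print(weighted_vote(ensemble, contact))
```

<p>Boosting algorithms learn those weights from data, upweighting models that correct the others&#8217; mistakes; here the weights are simply asserted for illustration.</p>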



<p>One area Mittu is excited about is recommender systems. He says most people are familiar with these systems, which are used in search engines and entertainment applications such as Netflix.</p>



<p>“Think of a military command-and-control system where users need good information to make good decisions,” he says. “By looking at what the user is doing in the system within some context, can we anticipate what the user might do next and infer what data they might need?”</p>
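<p>One common recommender technique (not necessarily what Mittu&#8217;s team uses) is user-based collaborative filtering: find the most similar user by cosine similarity over their ratings, then suggest that peer&#8217;s top unseen item. A minimal sketch on made-up data:</p>

```python
import math

# Hypothetical users and their ratings of information products.
ratings = {
    "ana":  {"map": 5, "weather": 3, "comms": 4},
    "ben":  {"map": 5, "weather": 2, "comms": 5, "intel": 4},
    "cara": {"weather": 5, "intel": 2},
}

def cosine(u, v):
    # Cosine similarity over the items both users have rated.
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def recommend(user):
    # Find the most similar other user, then suggest their top unseen item.
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    _, peer = max(others)
    unseen = {i: r for i, r in ratings[peer].items() if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ana"))
```

<p>A command-and-control variant would rate items implicitly, from what operators open and act on in context, rather than from explicit scores.</p>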



<p>Although the field of AI offers almost limitless potential for innovative solutions to today’s problems, Mittu notes that researchers obviously have many years of work ahead of them.</p>
<p>The post <a href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">What’s Wrong with Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google is making a ‘mistake’ with its AI choices, former U.S. officials say</title>
		<link>https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/</link>
					<comments>https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Jul 2019 13:29:15 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI choices]]></category>
		<category><![CDATA[criticizing]]></category>
		<category><![CDATA[former]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[intelligence services]]></category>
		<category><![CDATA[national security]]></category>
		<category><![CDATA[U.S.]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4121</guid>

					<description><![CDATA[<p>Source: fedscoop.com Two former high-ranking national security officials are criticizing Google for developing artificial intelligence in China while backing out of working with the Department of Defense, a move they <a class="read-more-link" href="https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/">Google is making a ‘mistake’ with its AI choices, former U.S. officials say</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: fedscoop.com</p>



<p>Two former high-ranking national security officials are criticizing Google for developing artificial intelligence in China while backing out of working with the Department of Defense, a move they said could benefit Chinese military and intelligence services.</p>



<p>Ash Carter, secretary of Defense under President Barack Obama, said Thursday that Google could be helping the Chinese military without knowing it. And the tech giant’s refusal to continue working with the Department of Defense on its AI development was a “mistake,” Carter said on CNBC.</p>



<p>The recent round of criticism comes after Peter Thiel, a Silicon Valley investor and supporter of President Trump, accused Google of acting “treasonous” by working on AI in China. Trump echoed Thiel’s concerns after referencing a “Fox &amp; Friends” segment by tweeting that his administration would “take a look.”</p>



<p>Richard Clarke, who was a top counterterrorism and cybersecurity aide in the Clinton and George W. Bush administrations, also backed Thiel’s criticism of Google.</p>



<p>“If you turn around and you work on artificial intelligence in China, and you don’t really know what they’re going to do with that, I think there’s an issue,” Clarke said on CNBC Wednesday.</p>



<p>A Google spokesperson flatly denied the company works with the Chinese military.</p>



<p>“We are not working with the Chinese military. We are working with the U.S. government, including the Department of Defense, in many areas including cybersecurity, recruiting and healthcare,” the spokesperson said in an emailed statement.</p>



<p>Trump’s nominee for secretary of Defense, Mark Esper, also stressed during his confirmation hearing last week that AI development would be a priority for him.</p>



<p>“Different people put different things as No. 1. For me it is AI,” Esper said. “It will likely change the character of warfare.”</p>



<p>While Google’s search function is banned in China, the company announced the opening of an AI center there in 2017. Google has long courted China and taken criticism for its pursuit of the more than billion-person market. Recently, a top Google executive confirmed at a Senate hearing that the company was not working on a censored version of its search function for use in China.</p>



<p>Last year Google employees protested the company’s involvement with the Defense Department’s Project Maven, citing ethical issues. The project aims to help Air Force analysts make better use of full-motion video surveillance by deploying AI and machine learning in the place of human eyeballs.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/">Google is making a ‘mistake’ with its AI choices, former U.S. officials say</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-is-making-a-mistake-with-its-ai-choices-former-u-s-officials-say/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
