<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI technique Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ai-technique/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ai-technique/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 27 Jul 2019 17:21:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>What’s Wrong with Deep Learning?</title>
		<link>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 27 Jul 2019 17:21:14 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI technique]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Naval Research Laboratory]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[U.S.]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4165</guid>

					<description><![CDATA[<p>Source: machinedesign.com Artificial Intelligence (AI) gets plenty of attention these days, but one researcher at the U.S. Naval Research Laboratory believes one particular AI technique might be <a class="read-more-link" href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">What’s Wrong with Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: machinedesign.com</p>



<p>Artificial Intelligence (AI) gets plenty of attention these days, but one researcher at the U.S. Naval Research Laboratory believes one particular AI technique might be getting a little too much.</p>



<p>“People have focused on an area of machine learning—deep learning (aka deep networks)—and less so on the variety of other artificial intelligence techniques,” says Ranjeev Mittu, head of NRL’s Information Management and Decision Architectures Branch. He has been working on AI for more than 20 years. “The biggest limitation of deep networks is that we still lack a complete understanding of how these networks arrive at solutions.”</p>



<p>Deep learning is a machine learning technique that can recognize patterns, such as identifying a collection of pixels as an image of a dog. The technique involves layering neurons together, with each layer devoted to learning a different level of abstraction.</p>



<p>In the dog image example, the lower layers of the neural network learn primitive details such as pixel values. The next set attempts to learn edges; higher layers learn combinations of edges, such as those that form a nose. With enough layers, these networks can recognize images nearly as well as humans.</p>
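<p>The layered structure described above can be sketched in a few lines: each layer transforms the previous layer's output into a slightly more abstract representation. The following toy example (purely illustrative, not NRL's code) stacks a hand-written edge-detecting layer on top of raw pixel values; a real deep network would learn these transforms from data rather than having them written by hand:</p>

```python
# Toy illustration of layered abstraction in a deep network:
# layer 1 turns raw pixels into edge responses, layer 2 combines
# edge responses into a coarser "part" score. Real networks learn
# these transforms; here they are hand-written for clarity.

image = [            # 4x4 grayscale "image" (the raw pixel layer)
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def edge_layer(img):
    """Layer 1: horizontal gradient at each pixel (edge detector)."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

def part_layer(edges):
    """Layer 2: aggregate edge responses into a single 'part' score."""
    return sum(sum(row) for row in edges)

edges = edge_layer(image)   # strong response where pixel values jump
score = part_layer(edges)   # high score = a vertical edge is present
```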



<p>But deep learning systems can be fooled easily just by changing a small number of pixels, according to Mittu. “You can have adversarial ‘attacks’: once you’ve created a model that recognizes dogs by showing it millions of pictures of dogs, changing a small number of pixels may cause the network to misclassify an image as a rabbit, for example.”</p>
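<p>This fragility can be demonstrated on even a trivial model. The sketch below (a toy linear classifier with made-up weights, not an actual image model) perturbs just two input values in the worst-case direction for the classifier, in the spirit of gradient-sign attacks, and flips its decision:</p>

```python
# Toy adversarial perturbation on a linear classifier: nudging a
# handful of "pixels" in the worst-case direction (the sign of each
# weight) flips the predicted class.

weights = [0.5, -1.0, 0.8, -0.3]      # hypothetical trained weights
pixels  = [1.0, 0.2, 0.6, 0.4]        # the original "dog" input

def classify(x):
    score = sum(w * xi for w, xi in zip(weights, x))
    return "dog" if score > 0 else "not a dog"

# Perturb only two of the four pixels, each by a small epsilon,
# in the direction that lowers the score.
epsilon = 0.8
adversarial = pixels[:]
for i in (1, 2):
    adversarial[i] -= epsilon * (1 if weights[i] > 0 else -1)

print(classify(pixels))        # -> dog
print(classify(adversarial))   # -> not a dog
```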



<p>The biggest flaw in this machine learning technique, according to Mittu, is that building these networks involves a large amount of art, which means there are few scientific methods for understanding when they will fail.</p>



<p>“Although deep learning has been highly successful, it is also currently limited because there is little visibility into its decision rationale. Until we truly reach a point where this technique becomes fully ‘explainable,’ it cannot inform humans as to how it arrives at a solution, or why it failed. We have to realize that deep networks are just one tool in the AI toolbox.”</p>



<p>He stresses that humans have to stay in the loop. “Imagine you have an automated threat-detection system on the bridge of your ship and it picks up a small object on the horizon,” Mittu says. “The deep network classification may indicate it is a fast attack craft coming at you, but you know a small set of uncertain pixels can mislead the algorithm. Do you believe it?</p>



<p>“A human will have to examine it further,” he continues. “There may always need to be a human in the loop for high-risk situations. There could be a high degree of uncertainty, and the challenge is to increase the classification accuracy while keeping the false alarm rate low. It is sometimes difficult to strike the perfect balance.”</p>
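<p>The tension between classification accuracy and false alarms is usually managed by sweeping a decision threshold: lowering it catches more real threats but also triggers more false alarms. A minimal sketch with invented classifier scores (not real detection data):</p>

```python
# Sweep a decision threshold over made-up classifier scores and
# observe the trade-off between detection rate and false-alarm rate.

threat_scores = [0.9, 0.8, 0.6, 0.4]   # scores for true threats
benign_scores = [0.7, 0.5, 0.3, 0.1]   # scores for harmless objects

def rates(threshold):
    detections   = sum(s >= threshold for s in threat_scores) / len(threat_scores)
    false_alarms = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    return detections, false_alarms

low  = rates(0.35)   # catches every threat, but alarms on benign ships too
high = rates(0.75)   # almost no false alarms, but misses real threats
```

Neither threshold is "right"; the operating point is a judgment call, which is part of why Mittu argues for keeping a human in the loop.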



<p>When it comes to machine learning, the key factor, simply put, is data.</p>



<p>Consider one of Mittu’s previous projects: analyzing commercial shipping vessel movements around the world. The goal was to have machine learning discern patterns in vessel traffic to identify ships involved in illicit activities. It proved a difficult problem to model and understand.</p>



<p>“We cannot have a global model because the behaviors differ for vessel classes, owners, and other characteristics,” he explains. “It is even different seasonally, because of sea state and weather patterns.”</p>



<p>But the bigger problem, Mittu found, was the possibility of mistakenly using poor-quality data.</p>



<p>“Ships transmit their location and other information, just like aircraft. But what they transmit can be spoofed,” Mittu said. “You don’t know if it is good or bad information. It is like changing those few pixels on the dog image that cause the system to fail.”</p>



<p>Missing data is another issue. Imagine a case in which you must move large numbers of people and materials on a regular basis to sustain military operations, and you’re relying on incomplete data to predict how you might act more efficiently.</p>



<p>“The difficulty comes when you start to train machine learning algorithms on poor quality data,” Mittu says. “Machine learning becomes unreliable at some point, and operators will not trust the algorithms’ outcomes.”</p>



<p>Mittu’s team continues to pursue AI innovations, and they advocate an interdisciplinary approach to employing AI systems to solve complex problems.</p>



<p>“There are many ways to improve predictive capabilities, but probably the best-of-breed will take a holistic approach and employ several AI techniques and strategically include the human decision-maker,” he says.</p>



<p>“Aggregating various techniques (similar to ‘boosting’), which may ‘weight’ algorithms differently, could provide a better answer. By employing combinations of AI techniques, the resulting system may also be more robust to poor data quality.”</p>
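<p>The aggregation Mittu describes can be sketched as a weighted vote: each technique gets a weight (for instance, reflecting its past reliability), and the combined system can tolerate one input degrading. A minimal illustration with hypothetical models and weights, not tied to any particular NRL system:</p>

```python
# Weighted-vote aggregation over several AI techniques, loosely
# analogous to boosting: each model's vote counts in proportion
# to a weight reflecting how much we trust it.

def weighted_vote(predictions, weights):
    """predictions: one class label per model; returns the winner."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# Three hypothetical techniques disagree; the two more-trusted
# models outvote the deep network that was fooled by noisy pixels.
models  = ["rabbit", "dog", "dog"]     # deep net, rule-based, Bayesian
weights = [0.5, 0.3, 0.4]
print(weighted_vote(models, weights))  # -> dog
```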



<p>One area Mittu is excited about is recommender systems. He says most people are familiar with these systems, which are used in search engines and entertainment applications such as Netflix.</p>



<p>“Think of a military command-and-control system where users need good information to make good decisions,” he says. “By looking at what the user is doing in the system within some context, can we anticipate what the user might do next and infer what data they might need?”</p>
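<p>One simple way to anticipate what a user might do next is to count which action has most often followed the current one in past sessions, i.e. a first-order Markov model. A toy sketch with invented action names (the source does not describe the actual system's method):</p>

```python
# First-order Markov next-action recommender: count which action
# most often followed each action in past user sessions, then
# recommend the most frequent successor. Action names are invented.

from collections import Counter, defaultdict

sessions = [
    ["open_map", "zoom_region", "request_weather"],
    ["open_map", "zoom_region", "mark_contact"],
    ["open_map", "zoom_region", "request_weather"],
]

followers = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        followers[current][nxt] += 1

def recommend(action):
    """Most frequent next action seen after `action`, or None."""
    counts = followers.get(action)
    return counts.most_common(1)[0][0] if counts else None

print(recommend("zoom_region"))   # -> request_weather
```

A real command-and-control assistant would also condition on context (mission, role, time), but the counting idea is the same.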



<p>Although the field of AI offers almost limitless potential for innovative solutions to today’s problems, Mittu notes that researchers obviously have many years of work ahead of them.</p>
<p>The post <a href="https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/">What’s Wrong with Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/whats-wrong-with-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Is Learning How To Develop Games</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 14 Sep 2017 07:24:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI technique]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[game development]]></category>
		<category><![CDATA[Online game]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1115</guid>

					<description><![CDATA[<p>Source &#8211; rollingstone.com Researchers at Georgia Institute of Technology are developing an AI that can recreate a game engine simply by watching gameplay. This technology, as detailed in <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/">Artificial Intelligence Is Learning How To Develop Games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>rollingstone.com</strong></p>
<p>Researchers at Georgia Institute of Technology are developing an AI that can recreate a game engine simply by watching gameplay.</p>
<p>This technology, as detailed in a press release, is being created in an effort to help video game developers &#8220;speed up game development and experiment with different styles of play.&#8221; During their most recent experiments, the AI watched two minutes of <i>Super Mario Bros.</i> gameplay, then built its own version of the game by studying the frames and predicting future events.</p>
<p>&#8220;To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single &#8216;speedrunner&#8217; video, where a player heads straight for the goal,&#8221; Georgia Institute&#8217;s communications officer Joshua Preston explained. This, he added, made for the most difficult possible training scenario for the AI.</p>
<p>By allowing the AI to study the actual frames of the game, researchers found it was able to predict frames much closer to the actual frames of <i>Super Mario Bros.</i> than in other tests the team had run with different methods. This simplifies the process: the AI only needs to watch a video of a game in action to begin replicating the game and learning its engine.</p>
<p>&#8220;Our AI creates the predictive model without ever accessing the game’s code, and makes significantly more accurate future event predictions than those of convolutional neural networks,” lead researcher Matthew Guzdial said in the release. “A single video won’t produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that’s pretty close.”</p>
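<p>At its core, the approach learns a predictive model of how one frame follows another. A heavily simplified sketch of that idea (not the Georgia Tech system, which learns far richer rules) records frame-to-frame transitions from an observed sequence and then replays them as a cloned &#8220;engine&#8221;:</p>

```python
# Heavily simplified "engine learning": observe pairs of successive
# frames, record each transition as a rule, then use the learned
# rules to simulate the game forward from any previously seen frame.
# Frame labels here are invented placeholders.

observed_frames = ["mario_ground", "mario_jump", "mario_air",
                   "mario_ground", "mario_jump", "mario_air"]

# Learn: map each frame to the frame that followed it.
engine = {}
for frame, nxt in zip(observed_frames, observed_frames[1:]):
    engine[frame] = nxt

def simulate(start, steps):
    """Run the cloned engine forward, predicting future frames."""
    frames = [start]
    for _ in range(steps):
        frames.append(engine[frames[-1]])
    return frames

print(simulate("mario_ground", 3))
# -> ['mario_ground', 'mario_jump', 'mario_air', 'mario_ground']
```

The real system must also handle transitions it never saw, which is why Guzdial notes that training on a few additional videos gets the clone &#8220;pretty close&#8221; rather than perfect.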
<p>Once the team had their model, there was only one test left: how did it play? A second AI system was then implemented to test the recreated level to ensure the player wouldn&#8217;t fall through a level – kind of like a QA tester, but instead a highly intricate AI system.</p>
<p>The researchers found &#8220;the AI playing with the cloned engine proved indistinguishable compared to an AI playing the original game engine.&#8221;</p>
<p>&#8220;To our knowledge this represents the first AI technique to learn a game engine and simulate a game world with gameplay footage,&#8221; associate professor of Interactive Computing and co-investigator on the project Mark Riedl said.</p>
<p>The researchers go on to stress that, as of right now, their AI systems work best when the majority of the action happens on screen. Games where action happens away from the player&#8217;s direct frame of sight might prove difficult for the system.</p>
<p>The nascent technology does raise the question of what sort of impact a more realized version of the AI could have on the game industry. Specifically, could it eliminate the need for certain jobs, like QA tester, in the game industry?</p>
<div class="article-content">
<p>However, Georgia Tech&#8217;s Riedl says developers don&#8217;t need to fear their job security; this technology will be an aid in development, not a replacement. Riedl tells Glixel that this AI will help novice game developers create projects once out of their reach. Using this kind of AI would allow developers with no coding or design experience to show the AI how a game should work, which it would then replicate.</p>
<p>&#8220;Instead of putting people out of work, this will make it possible for people to create games that were otherwise unable to do so,&#8221; Riedl said. &#8220;That makes it possible for more people to create – increasing the size of the pie instead of supplanting individuals. Second, professionals may be able to build games faster by having the system make an initial guess about the mechanics. Working more efficiently doesn’t necessarily put people out of work, but does allow them to make bigger and better games in the time available.&#8221;</p>
<p>What about QA testers? Well, according to Riedl, they&#8217;ll still be necessary thanks to one feature they have over AI systems necessary for playing games: the human touch.</p>
<p>&#8220;[Video games] are made to be enjoyed by humans,&#8221; Riedl said. &#8220;Because of that you&#8217;re always going to need humans to actually test the games. AI might help to test things we simply can&#8217;t test currently but can be formalized mathematically, like game balance &#8230; but one will need to use humans to see if other humans will enjoy the game for the foreseeable future.&#8221;</p>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/">Artificial Intelligence Is Learning How To Develop Games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-how-to-develop-games/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
