<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>games Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/games/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/games/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 26 Feb 2021 11:28:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Machine Learning Pwns Old-School Atari Games</title>
		<link>https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 26 Feb 2021 11:28:13 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Atari]]></category>
		<category><![CDATA[games]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Old]]></category>
		<category><![CDATA[Pwns]]></category>
		<category><![CDATA[school]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13118</guid>

					<description><![CDATA[<p>Source &#8211; https://www.scientificamerican.com/ You can call it the ‘revenge of the computer scientist.’ An algorithm that made headlines for mastering the notoriously difficult Atari 2600 game Montezuma’s Revenge, can now beat more games, achieving near perfect scores, and help robots explore real-world environments. Pakinam Amer reports. This is Scientific American’s 60 Second Science. I’m Pakinam <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/">Machine Learning Pwns Old-School Atari Games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.scientificamerican.com/</p>



<p>You can call it the ‘revenge of the computer scientist.’ An algorithm that made headlines for mastering the notoriously difficult Atari 2600 game Montezuma’s Revenge can now beat more games with near-perfect scores, and can help robots explore real-world environments. Pakinam Amer reports.</p>



<p>This is Scientific American’s 60 Second Science. I’m Pakinam Amer.</p>



<p>Whether you’re a pro gamer or you dip your toes in that world every once in a while, chances are you’ve gotten stuck while playing a video game, or were even gloriously defeated by one.</p>



<p>I know I have.</p>



<p>Maybe, in your frustration, you kicked the console a little. Maybe you took it out on the controllers or—if you’re an ’80s kid like me—made the joystick pay.</p>



<p>Now, a group of computer scientists from Uber AI is taking revenge for all of us who’ve been in this situation before.</p>



<p>Using a family of simple algorithms, tagged ‘Go-Explore’, they went back and beat some of the most notoriously difficult Atari games whose chunky blocks of pixels and 8-bit tunes had once challenged, taunted and even enraged us.</p>



<p>&lt;swish&gt;</p>



<p>But what does revisiting those games from the 80s and 90s accomplish, besides fulfilling a childhood fantasy?</p>



<p>According to the scientists, who published their work in&nbsp;<em>Nature</em>, experimenting with video games that require complex, hard exploration leads to better learning algorithms, which become more intelligent and perform better in real-world scenarios.</p>



<p><strong>Joost Huizinga:&nbsp;</strong>One of the nice things of Go-Explore is that it&#8217;s not just limited to video games, but that you can also apply it to practical applications like robotics.</p>



<p>That was Joost Huizinga, one of the principal researchers at Uber AI. Joost developed Go-Explore with Adrien Ecoffet and other scientists.</p>



<p>So how does it actually work?</p>



<p>Let’s start with the basics. When AI processes images of the world in the form of pixels, it does not know which changes should count and which should be ignored. For instance, a slight change in the pattern of the clouds in the sky in a game environment is probably unimportant when exploring said game, but finding a missing key certainly is. To the AI, though, both involve changing a few pixels in that world.</p>



<p>This is where deep reinforcement learning comes in. It’s an area of machine learning that helps an agent analyze an environment to decide what matters and which actions count through feedback signals in the form of extrinsic and intrinsic rewards.</p>



<p><strong>Joost Huizinga:&nbsp;</strong>This is something that animals, basically, constantly do. You can imagine, if you touch a hot stove, you immediately get strong negative feedback like, ‘hey, this is something you shouldn&#8217;t do in the future.’ If you eat a bar of chocolates, assuming you like chocolates, you immediately get a positive feedback signal like, ‘hey, maybe I should seek out chocolate more in the future.’ The same is true for machine learning. These are problems where the agent has to take some actions, and then maybe it wins a game.</p>



<p>Creating an algorithm that can navigate rooms with traps, obstacles to jump over, rewards to collect and pitfalls to avoid, means that you have to create an artificial intelligence that is curious and that can explore an environment in a smart way.</p>



<p>This helps it decide what brings it closer to a goal, or how to collect hard-to-get treasures.</p>



<p>Reinforcement learning is great for that but it isn’t perfect in every situation.</p>



<p><strong>Joost Huizinga:&nbsp;</strong>In practice, reinforcement learning works very well, if you have very rich feedback, if you can tell, ‘hey, this move is good, that move is bad, this move is good, that move is bad.’</p>



<p>In Atari games like Montezuma’s Revenge, the game environment offers little feedback and its rewards can intentionally lead to dead ends. Randomly exploring the space just doesn’t cut it.</p>



<p><strong>Joost Huizinga:&nbsp;</strong>You could imagine, and this is especially true in video games like Montezuma&#8217;s Revenge, that sometimes you have to take a lot of very specific actions, you have to dodge hazards, jump over enemies, you can imagine that random actions like, ‘hey, maybe I should jump here,’ in this new place, is just going to lead to a &#8216;Game Over&#8217; because that was a bad place to jump … especially if you&#8217;re already fairly deep into the game. So let&#8217;s say you want to explore level two, if you start taking random actions in level one and just randomly dying, you&#8217;re not going to make progress on exploring level two.</p>



<p>You can’t rely on ‘intrinsic motivation’ alone, which in the context of artificial intelligence typically comes from exploring new or unusual situations.</p>



<p><strong>Joost Huizinga:&nbsp;</strong>Let&#8217;s say you have a robot and it can go left into the house and right into the house, let&#8217;s say at first it goes left, it explores left, meaning that it gets this intrinsic reward for a while. It doesn&#8217;t quite finish exploring left and at some point, the episode ends and it starts anew in the starting room. This time it goes right, it goes fairly far into the room on the right, it doesn&#8217;t quite explore it. And then it goes back to the starting room. Now the problem is because it has gone both left and right and basically it&#8217;s already seen the start, it no longer gets as much intrinsic motivation from going there.</p>
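<p>To see why that intrinsic reward dries up, here is a toy count-based novelty bonus in Python (an illustrative sketch, not the actual algorithm; the class name and the 1/sqrt(visits) formula are invented for the example):</p>

```python
from collections import defaultdict

class NoveltyBonus:
    """Toy count-based intrinsic reward: novel states pay the most.

    Illustrative sketch only (not Go-Explore itself): the bonus
    1/sqrt(visits) shrinks every time a state is revisited, which is
    why a purely novelty-driven robot loses interest in the starting
    room once it has seen both the left and the right side.
    """
    def __init__(self):
        self.visits = defaultdict(int)

    def reward(self, state):
        self.visits[state] += 1
        return 1.0 / self.visits[state] ** 0.5

bonus = NoveltyBonus()
first = bonus.reward("start_room")   # 1.0: brand new, highly rewarding
later = bonus.reward("start_room")   # smaller on every later return
```

<p>After enough visits the bonus is close to zero, so nothing pulls the agent back through already-seen rooms toward the unexplored areas beyond them.</p>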



<p>In short, it stops exploring and counts that as a win.</p>



<p>Detaching from a place that was previously visited after collecting a reward doesn’t work in difficult games, because you might leave out important clues.</p>



<p>Go-Explore gets around this by&nbsp;<em>not&nbsp;</em>rewarding some actions, such as going somewhere new.</p>



<p>Instead, it encourages “sufficient exploration” of a space, with little or no hints, by enabling its agent to explicitly ‘remember’ promising places or states in a game.</p>



<p>Once the agent keeps a record of that state, it can then reload it and intentionally explore from there&#8211;what Adrien and Joost call the “first return, then explore” principle.</p>
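<p>In code, the “first return, then explore” loop might look like this minimal Python sketch (a toy one-dimensional world, not the published Go-Explore implementation; the archive maps each discovered state, or “cell,” to the action sequence that reached it):</p>

```python
import random

def go_explore(env_step, start_state, iterations=200, rollout=5, seed=0):
    """Minimal 'first return, then explore' loop (illustrative only).

    env_step(state, action) -> next_state must be deterministic, so
    that replaying a cell's stored action sequence restores it exactly.
    """
    rng = random.Random(seed)
    archive = {start_state: []}              # cell -> actions reaching it
    for _ in range(iterations):
        cell = rng.choice(sorted(archive))   # 1. select a remembered cell
        path = list(archive[cell])           # 2. "first return" to it
        state = cell
        for _ in range(rollout):             # 3. "then explore" from there
            action = rng.choice((-1, 1))
            state = env_step(state, action)
            path.append(action)
            if state not in archive:         # 4. remember each new cell
                archive[state] = list(path)
    return archive

# Toy environment: positions 0..10 on a line, moves clamped at the ends.
archive = go_explore(lambda s, a: max(0, min(10, s + a)), start_state=0)
```

<p>Because the agent restores a frontier state instead of randomly stumbling back to it, no progress is lost to early deaths, which is exactly the failure mode Joost describes for exploring level two.</p>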



<p>According to Adrien, by leaning on another form of learning called imitation learning, in which agents can mimic how humans perform tasks, their AI can go a long way, especially in the field of robotics.</p>



<p><strong>Adrien Ecoffet:</strong>&nbsp;You have a difference between the world that you can train in and the real world. So one example would be if you&#8217;re doing robotics &#8230; you know, in robotics, it&#8217;s possible to have simulations of your robotics environments. But then, of course, you want your robot to run in the real world, right? And so what you can do, then? If you&#8217;re in a situation like that, of course, the simulation is not exactly the same as the environment, so just having something that works in simulation is not necessarily sufficient. We show that in our work … What we&#8217;re doing is that we&#8217;re using existing algorithms that are called ‘imitation learning’. And what it is, is it just takes an existing solution to a problem and just makes sure that you can reliably use that solution, even when, you know, there are slight variations in your environment, including, you know, it being the real world rather than a simulation.</p>



<p>Adrien and Joost say their model’s strength lies in its simplicity.</p>



<p>It can be adapted and expanded easily into real-life applications such as language learning or drug design.</p>



<p>That was 60 Seconds Science, and this is Pakinam Amer. Thank you for listening.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/">Machine Learning Pwns Old-School Atari Games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-pwns-old-school-atari-games/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The ultimate game of chess: war games, machine learning, and artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 11 Feb 2021 08:39:00 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[chess]]></category>
		<category><![CDATA[games]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ultimate]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12846</guid>

					<description><![CDATA[<p>Source &#8211; https://www.dvidshub.net/ What does a computer have that humans don’t? The answer depends on who you ask. Those who build surveillance systems would say it’s near-perfect, objective recall. A robotics engineer might say it’s the ability to handle tedium. A team at Naval Information Warfare Center (NIWC) Pacific conducting research in wargaming says it’s <a class="read-more-link" href="https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/">The ultimate game of chess: war games, machine learning, and artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.dvidshub.net/</p>



<p>What does a computer have that humans don’t? The answer depends on who you ask. Those who build surveillance systems would say it’s near-perfect, objective recall. A robotics engineer might say it’s the ability to handle tedium.</p>



<p>A team at Naval Information Warfare Center (NIWC) Pacific conducting research in wargaming says it’s the ability to find winning moves in a game with more possible game combinations than there are particles in the observable universe.</p>



<p>Now they’re trying to discover how an artificial intelligence (AI) agent that uses machine learning (ML) to find those winning moves could help commanders make better tactical decisions.</p>



<p>“Gamifying” command and control</p>



<p>Dr. Doug Lange, senior science and technology manager for ML and AI at NIWC Pacific, likens it to a high-powered game of chess.</p>



<p>“Chess is a relatively simple game compared to warfare,” said Lange. “But tweak the game environment enough and it starts to look like a command and control problem.” NIWC Pacific wants to solve that problem by experimenting with games that mimic the complications of warfare.</p>



<p>One way to use AI for wargaming is to code decision processes into an AI agent, but those processes are products of the human mind, and so, prone to human error. “If I program an agent to play a game, its strategies for how to play are going to be very predictable because I’ve programmed them into it,” said Lange.</p>



<p>A better way to use AI for wargaming is to program into it the bare minimum: give it basic parameters about what types of moves are legal to do, then step back and let it teach itself. Then the agent can learn the value of different moves in the context of the game as it plays, a technique called reinforcement learning.</p>



<p>The benefit of using ML to develop wargaming AI is that the early moves don’t have to be good ones. They just have to be made repeatedly so the agent can accumulate knowledge about which moves lead to desirable outcomes. Eventually the agent, impervious to tedium and subjectivity, begins to paint a picture for optimal game strategy. And because an AI agent can play the game more times than a human could over their lifetime, that picture is better than one any human mind could paint.</p>
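<p>That learning loop can be sketched with tabular Q-learning in Python (a toy “race to square 3” game invented for illustration, not the NIWC Pacific code): the agent’s early moves are bad, but repeated play accumulates value estimates for which moves lead to the win.</p>

```python
import random

def train_q(episodes=2000, alpha=0.5, gamma=0.9, eps=0.3, seed=1):
    """Tabular Q-learning on a toy 'race to square 3' game.

    States are positions 0..3; actions move -1 or +1 (clamped at the
    ends); reaching square 3 is the only reward. The agent is given
    just the legal moves, then teaches itself by playing repeatedly.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(4) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(15):
            # epsilon-greedy: mostly exploit current knowledge,
            # sometimes explore at random
            if rng.random() < eps:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda x: q[(s, x)])
            s2 = max(0, min(3, s + a))
            r = 1.0 if s2 == 3 else 0.0      # winning is the only reward
            target = r + gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if r:
                break
    return q

q = train_q()
# after training, the greedy policy always advances toward the goal
policy = {s: max((-1, 1), key=lambda a: q[(s, a)]) for s in range(3)}
```

<p>No move is labelled good or bad in advance; the +1 policy emerges purely from which sequences of moves led, eventually, to the desirable outcome.</p>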



<p>This concept isn’t new. A London-based AI company and research laboratory has also demonstrated methods which result in superhuman ability for games such as chess and Go, an abstract strategy game believed to be the oldest board game still played today. Lange and his team at NIWC Pacific are attempting to build on this technology by creating reinforcement learning-based algorithms for military simulations.</p>



<p>In existing wargames created by industry, players must manage resources, synchronize forces, and engage in battles. The settings aren’t realistic, since they’re often science fiction-based computer games, but the decisions involved are similar to the types of decisions military leaders make. The NIWC Pacific team began using these games to experiment with algorithms, then learned their allies, the U.K. and Australia, were doing the same.</p>



<p>Sharing solutions</p>



<p>The NIWC Pacific team began collaborating with the U.K. and Australia on ML in war games through The Technical Cooperation Program (TTCP), a “Five Eyes” partnership among Australia, Canada, New Zealand, the U.K. and the U.S. Their relationship with Australian and British researchers began during the Autonomy Strategic Challenge, which concluded in 2018, and continues through the AI Strategic Challenge, which begins in 2021 and concludes in 2023.</p>



<p>The U.K. and Australia have been developing and sharing computer games such as the U.K.’s “Hunting of the Plark,&#8221; an antisubmarine warfare game, and an Australian surface warfare game. NIWC Pacific developers were using ML to program AI agents to play these abstract war games, but the teams had been developing different AI and ML strategies for wargaming. Now they could share knowledge and, together, learn to win more challenging games.</p>



<p>Along the way, the team at NIWC Pacific also learned the U.S. Army Futures Command was interested in ML for wargaming. NIWC Pacific and their Army partners worked with their British and Australian allies to propose a Coalition Warfare Program (CWP) on wargaming, led by Lange and Dr. Keith Brawner, a senior engineer with the U.S. Army Combat Capabilities Development Command Soldier Center, as co-principal investigators. The CWP, an Office of the Executive Director for International Cooperation program, fosters international partnerships to advance research in technology gaps and helps strengthen strategic alliances.</p>



<p>The British and Australian warfare games’ distribution status made open communication possible but, although they add complexity missing in chess or Go, they only vaguely resemble military decision-making processes. The CWP will provide a secure structure and resources, starting in January 2021, to partner on more complex games built specifically for pursuit of real-world, high-fidelity decision support to warfighters.</p>



<p>When asked what that deliverable would look like, Lange described a war game with all-domain integration in well-defined, realistic military scenarios, with which an AI agent could help decision makers discover winning outcomes.</p>



<p>What it might look like is an AI agent breaking the silence in a mission control center racked with indecision by speaking up: “I have an insight.”</p>



<p>‘Making Kennedy real’</p>



<p>This is the plot of NIWC Pacific’s short film “Conflict 2037,” which outlines the Center’s technical vision for using ML capabilities to enhance naval operations. In a futuristic combat information center aboard a fictionalized ship, the ship’s AI agent, named Kennedy, uses its seamless access to virtual oceans of information to present recommendations to an admiral and her staff. They ask Kennedy follow-up questions before concurring with Kennedy’s recommendation.</p>



<p>The film’s star, AI agent Kennedy, is a reminder of what’s possible — and what will soon become crucial for maintaining a tactical edge in the information warfare domain. For the NIWC Pacific team and their CWP partners, it’s a reminder of their roles in building the technology needed to “make Kennedy real.” But NIWC Pacific’s research on wargaming is just one piece of the CWP, and just one part of the Center’s mission to amplify lethality in all domains.</p>



<p>“We want to foster significantly faster learning about war-winning, all-domain integration,” said Capt. Andrew Gainer, commanding officer of NIWC Pacific. “‘Kennedy’ is just an example, and technology is only part of the equation. Along with the infrastructure, policies, and budget needed for rapid capability development and experimentation, we require partners with a similar sense of urgency and understanding of the high-end maritime fight.”</p>



<p>Winning harder games</p>



<p>When it comes to “making Kennedy real,” the partnership with the U.K. and Australia could be a crucial piece in helping NIWC Pacific realize the Center’s vision depicted in “Conflict 2037.” As Lange explains, superiority in information warfare is a more complicated game than any one human mind can win.</p>



<p>“In chess, computers exhibit superhuman capabilities by using machine learning algorithms to learn to play the game. The best players in chess aren’t people, they’re programs,” said Lange. “When programs designed for wargaming are used in tandem with human intelligence, we’ll make the best warfighting decisions.”</p>



<p>But it’s the best naval, joint, and foreign players who are needed to build those programs. Through the wargaming CWP, NIWC Pacific has helped assemble the tools required for rapid innovation: the infrastructure, policies, and budget. In the coming years they’ll get to use these tools to further their research on war-winning AI. “Now the question becomes,” said Lange, “can we learn to play more complicated games?”</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/">The ultimate game of chess: war games, machine learning, and artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-ultimate-game-of-chess-war-games-machine-learning-and-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google installs Typhoon Studios to work on the development of Stadia games</title>
		<link>https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/</link>
					<comments>https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 21 Dec 2019 07:03:52 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[games]]></category>
		<category><![CDATA[Google]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5749</guid>

					<description><![CDATA[<p>Source: mashviral.com Google AI makes memory a gameTiernan Ray explains how DeepMind, a Google unit that develops ambitious artificial intelligence projects, found a way to stimulate the kind of long-term planning of risk and reward that humans do by turning memory into a game of actions and benefits futures. Google has announced the acquisition of <a class="read-more-link" href="https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/">Google installs Typhoon Studios to work on the development of Stadia games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: mashviral.com</p>



<p>Google AI makes memory a game<br>Tiernan Ray explains how DeepMind, a Google unit that develops ambitious artificial intelligence projects, found a way to stimulate the kind of long-term planning of risk and reward that humans do by turning memory into a game of actions and future benefits.</p>



<p>Google has announced the acquisition of Typhoon Studios, a Canadian game developer now set to join the Stadia Games and Entertainment team.</p>



<p>Financial details have not been disclosed.</p>



<p>In a blog post on Thursday, Jade Raymond, vice president and head of Google Stadia Games &amp; Entertainment, said the Typhoon Studios team, responsible for the development of the upcoming co-op game Journey to the Savage Planet, will continue working on the game&#8217;s content until its launch on January 28, 2020.</p>



<p>Founded in 2017, Typhoon Studios is a small independent development team of 26 people, co-founded and led by Reid Schneider and Alex Hutchinson.</p>



<p>Schneider has previously worked on Splinter Cell and Batman titles, while Hutchinson has held creative and design roles on The Sims 2, Assassin&#8217;s Creed III and Far Cry 4, as&nbsp;Gamesindustry.biz&nbsp;noted.</p>



<p>Raymond told the publication that the co-founders have &#8220;assembled an AAA team&#8221;, and this group of talents will give Google an &#8220;advantage&#8221; in the game development industry.</p>



<p>The company has previously raised $225,000 through a seed round, with investment provided by the Makers Fund.</p>



<p>&#8220;We are delighted to join the Google team to work with the Stadia Games and Entertainment team, making great games with great people!&#8221; said the studio.</p>



<p>Typhoon Studios, led by its co-founders, will join the Stadia Games and Entertainment studio based in Montreal, Canada.</p>



<p>The studio was announced in October, and Raymond described the company as a place to create &#8220;exclusive and original content in a diverse portfolio of games in all your favorite genres.&#8221;</p>



<p>&#8220;Working with some of the best game creators in the world, we have learned that a successful studio comes down to great people who have the vision to execute the best ideas,&#8221; says Raymond. &#8220;We are always looking for people who share our passion and vision for the future of games.&#8221;</p>






<p>Google is not the only company exploring the possible sources of revenue that cloud gaming has to offer. Earlier this week, Facebook confirmed the acquisition of PlayGiga, a Spanish startup focused on cloud-based game subscriptions.</p>



<p>The agreement is believed to have an approximate value of €70 million ($78 million).</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/">Google installs Typhoon Studios to work on the development of Stadia games</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-installs-typhoon-studios-to-work-on-the-development-of-stadia-games/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Gaming Industry Is Revolutionising Artificial Intelligence, One Win At A Time</title>
		<link>https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/</link>
					<comments>https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 08 Sep 2018 09:35:38 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI development]]></category>
		<category><![CDATA[AI learning]]></category>
		<category><![CDATA[AI researchers]]></category>
		<category><![CDATA[ANN]]></category>
		<category><![CDATA[games]]></category>
		<category><![CDATA[Gaming Industry]]></category>
		<category><![CDATA[SVM]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2836</guid>

					<description><![CDATA[<p>Source &#8211; analyticsindiamag.com Today, artificial intelligence is dominating most of the games — from board games to interactive fiction games. They are providing complex, decision-making environments for AI to experiment with. The ability of games to provide interesting and complex problems, offering creativity and expression, has made them one of the most popular and meaningful domain for AI <a class="read-more-link" href="https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/">The Gaming Industry Is Revolutionising Artificial Intelligence, One Win At A Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; analyticsindiamag.com</p>
<p>Today, artificial intelligence is dominating most games — from board games to interactive fiction games. They provide complex, decision-making environments for AI to experiment with. The ability of games to provide interesting and complex problems, offering creativity and expression, has made them one of the most popular and meaningful domains for AI researchers.</p>
<p>Games offer one of the most meaningful domains that can process, interpret and stimulate human behaviour. The current gaming industry is not only deploying better graphics but is also exploring the area of virtual gameplay. The two-way relationship of gaming and AI has begun to tread a new road and it can be said that the gaming industry is largely revolutionising the way AI works.</p>
<h3>AI In Gaming Industry</h3>
<p>The application of AI to the gaming industry dates back to Arthur Samuel’s checkers program of 1956. From that first application, which could beat professional players, to the present day’s AlphaGo, AI in gaming has come a long way.</p>
<p>Today we see an enormous upsurge of AI in games. <i>First Encounter Assault Recon</i>, popularly known as <i>F.E.A.R.</i>, and <i>The Last Of Us</i> are some of the most popular games that give a very realistic experience with the use of AI.</p>
<h3>How Does Gaming Aid AI?</h3>
<p>Games are difficult because of the complexity and skill they demand of their players. This complexity makes games very desirable for AI to work on. A typical game has about 10<sup>1685</sup> possible states, whereas the number of protons in the observable universe is only of the order of 10<sup>80</sup>. This shows the degree to which the gaming domain is complicated and rich with data. And where there is plenty of data, AI is always at an advantage. With larger sets of training data, AI has the ability to be less predictable and more spontaneous, thereby making the game infinitely interesting and impulsive.</p>
<p><b>Interaction</b>:</p>
<p>As every game involves players, the interaction of the player with the game is advantageous to AI, as it gives the algorithm access to study the player’s experience and emotional behaviour. The study of this interaction between game and human is key not only to studying human behaviour; it also paves the way for AI to build better human-computer interaction systems and to address the challenges faced by their applications in the real world.</p>
<p><b>Decision-Making</b>:</p>
<p>This is the main crux of AI: it must be able to make decisions by looking at the opponent’s actions. Various models are used for in-game decision-making, with Markov models among the most popular; the finite state machine (FSM) is another of the many AI methods used for decision-making.</p>
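<p>A minimal FSM for a game character can be written in a few lines of Python (the states, distances and thresholds below are invented for illustration):</p>

```python
# Hypothetical finite state machine for a game NPC. Each state maps an
# observation (distance to the player) to the next state; the agent's
# "decision" is simply its current state's transition rule.
TRANSITIONS = {
    "patrol": lambda d: "chase" if d < 10 else "patrol",
    "chase":  lambda d: "attack" if d < 2 else ("patrol" if d > 15 else "chase"),
    "attack": lambda d: "attack" if d < 2 else "chase",
}

def step(state, distance_to_player):
    """Advance the NPC one decision tick."""
    return TRANSITIONS[state](distance_to_player)

# The player approaches, gets caught, then escapes out of range.
s = "patrol"
for d in (12, 8, 5, 1, 1, 6, 20):
    s = step(s, d)
```

<p>Each decision is just the current state’s rule applied to an observation, which is what makes FSMs cheap and predictable compared with learned agents.</p>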
<p><b>Prediction Ability</b>:</p>
<p>Prediction involves anticipating the next move of the player so that decision-making can be based on it. This is done using methods like past-pattern recognition and random guessing. Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Decision Tree Learning are among the algorithms used for prediction, and regression algorithms are used for predicting player behaviour. This includes predicting when the player is expected to be at a particular level of the game, which item the player will pick next, or when they will move to the other lane. By experimenting with these in virtual games, we can implement these algorithms and models in the real world as well.</p>
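<p>The simplest form of such past-pattern prediction is a frequency table over consecutive moves, sketched below in Python (a toy stand-in for the ANN/SVM approaches mentioned above; the move names are invented):</p>

```python
from collections import Counter, defaultdict

class MovePredictor:
    """Toy past-pattern predictor (illustrative, not a trained SVM/ANN):
    counts which move has followed each previous move, then predicts
    the most frequent follower."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, prev_move, next_move):
        self.counts[prev_move][next_move] += 1

    def predict(self, prev_move):
        followers = self.counts[prev_move]
        return followers.most_common(1)[0][0] if followers else None

p = MovePredictor()
history = ["left", "jump", "left", "jump", "left", "shoot"]
for prev, nxt in zip(history, history[1:]):
    p.observe(prev, nxt)
prediction = p.predict("left")   # "jump" followed "left" twice, "shoot" once
```

<p>Real game AI replaces the frequency table with a learned model, but the shape of the problem — observe the player’s history, predict the next action — is the same.</p>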
<p><b>Intelligence</b>:</p>
<p>Social intelligence and human-computer interaction are among the supreme objectives of AI. Games take both into consideration, and in that way they help AI development, with virtual characters exhibiting human behaviour as well as intelligence.</p>
<p>AI has learnt more about the intelligence of computers from games than from any other application, because they provide a virtual platform to test every kind of algorithm. Moreover, they also provide complicated mathematical problems to deal with, so the AI learning is not just restricted to the gaming world.</p>
<p>The success of deep Q-learning in learning to play arcade games with human-level performance by just looking at and processing the pixels on the screen is an example of intelligence. The study of intelligence within games lets us know more not only about human intelligence but also about machine intelligence.</p>
<p>The recent Dota 2 tournament, ‘The International’, had bots competing with professional players. Although they couldn’t win the match, the ability that AI can be bestowed with to play games as complicated as Dota 2 is remarkable. Another glimpse into the future of AI in games is at Michigan State University, where a group of researchers have deployed AI that learns a game from every player’s behaviour; it adapts to the individual player and plays its next move accordingly.</p>
<p>Games offer both entertainment and interaction, in turn providing a very strong realisation of the affective loop, which is very important in gaming. They provide a multitude of features at once — visual art, sound design, graphic design, beautification, narrative and virtual cinematography — all in one single piece of software. Games are perfect testbeds for AI because they act as the best application of computer creativity. As a result, the use of computational creativity in the gaming industry provides a way to advance AI; it not only challenges computer creativity but also advances it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/">The Gaming Industry Is Revolutionising Artificial Intelligence, One Win At A Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-gaming-industry-is-revolutionising-artificial-intelligence-one-win-at-a-time/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
