<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Video Games Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/video-games/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/video-games/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 03 Apr 2020 06:57:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>A.I. CAN NOW BEAT EVERY TITLE OF THIS ICONIC 1977 VIDEO GAME CONSOLE</title>
		<link>https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/</link>
					<comments>https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 03 Apr 2020 06:57:22 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Video Games]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7919</guid>

					<description><![CDATA[<p>Source:inverse.com You might think you&#8217;re good at Atari games, but a new artificial intelligence system from Alphabet subsidiary DeepMind called Agent57 is probably better. The company claims <a class="read-more-link" href="https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/">A.I. CAN NOW BEAT EVERY TITLE OF THIS ICONIC 1977 VIDEO GAME CONSOLE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:inverse.com</p>



<p>You might think you&#8217;re good at Atari games, but a new artificial intelligence system from Alphabet subsidiary DeepMind called Agent57 is probably better. The company claims its A.I. can beat the average person on all 57 Atari 2600 games.</p>



<p>Agent57 uses a type of machine learning called deep reinforcement learning to learn from its mistakes and get better at playing the games. A research paper published by DeepMind on Tuesday explains why games are a great way to test A.I. systems.</p>
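<p>As a rough illustration of the trial-and-error idea behind reinforcement learning, here is a minimal tabular Q-learning sketch on a toy task. This is not Agent57&#8217;s code (Agent57 uses deep neural networks and far more machinery); every name and number below is illustrative:</p>

```python
import random

# Toy "game": states 0..4 on a line; reaching state 4 scores 1.
# The agent learns from trial and error which action (0=left, 1=right)
# improves its score -- the core idea of reinforcement learning.
N_STATES, ACTIONS = 5, [0, 1]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit the current estimates, sometimes explore at random.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        # Q-update: nudge the estimate toward reward plus discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(s, a)])
        s = nxt

# After training, moving toward the goal is valued more highly everywhere.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)})
```

The agent is never told the rules; it simply discovers, from scores alone, that moving right is better than moving left.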



<p>&#8220;Games are an excellent testing ground for building adaptive algorithms: they provide a rich suite of tasks which players must develop sophisticated behavioral strategies to master, but they also provide an easy progress metric – game score – to optimize against,&#8221; the paper reads. &#8220;The ultimate goal is not to develop systems that excel at games, but rather to use games as a stepping stone for developing systems that learn to excel at a broad set of challenges.&#8221;</p>



<p>DeepMind used the same type of machine learning to develop its A.I. system AlphaGo, which in 2016 beat Lee Sedol, then a 33-year-old top professional player of the ancient Chinese board game Go, four games to one. When AlphaGo won the first round against Sedol, Elon Musk commented that experts had believed such a feat was still a decade away.</p>



<p>Some of the most challenging games Agent57 had to tackle were Montezuma&#8217;s Revenge, Pitfall, Solaris, and Skiing. Other A.I. systems have struggled with those games, but Agent57 performed better than any previous A.I. and, for the first time, exceeded the average person&#8217;s performance.</p>



<p>Pitfall and Montezuma&#8217;s Revenge are difficult for A.I. because they require a lot of long-term strategy. Solaris and Skiing are difficult because feedback is delayed: many actions pass before the score reflects their consequences, which makes it hard for an A.I. to learn from its mistakes. Agent57 outperformed humans despite these challenges.</p>
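<p>The delayed-reward problem in a game like Skiing can be made concrete with a small sketch of discounted returns, the standard way reinforcement learning propagates a final score backward over an episode. This is a textbook construction, not DeepMind&#8217;s implementation; the episode length and score are invented for illustration:</p>

```python
# Illustrative sketch of delayed reward: in a game like Skiing, the score
# arrives only at the very end, so every earlier action shares the credit.
# Discounted returns propagate that final score backward over the episode.
gamma = 0.99
rewards = [0.0] * 99 + [-30.0]  # 100 steps, a single (negative) score at the end

returns = []
g = 0.0
for r in reversed(rewards):
    g = r + gamma * g  # G_t = r_t + gamma * G_{t+1}
    returns.append(g)
returns.reverse()

# The learning signal reaching the first action is heavily discounted,
# which is why assigning credit to early decisions is so hard.
print(round(returns[0], 2), round(returns[-1], 2))
```

The final step sees the full score, while the first step receives only a faint, discounted echo of it.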



<p>&#8220;With Agent57, we have succeeded in building a more generally intelligent agent that has above-human performance on all tasks in the Atari57 benchmark,&#8221; the paper reads. &#8220;Agent57 was able to scale with increasing amounts of computation: the longer it trained, the higher its score got. While this enabled Agent57 to achieve strong general performance, it takes a lot of computation and time; the data efficiency can certainly be improved.&#8221;</p>



<p>The Atari 2600 was released in 1977, and millions of consoles had been sold by 1980. The Atari changed gaming forever, and its games maintain a large fanbase to this day. Iconic games like Pitfall!, Missile Command, Space Invaders, Asteroids, and more are still played by people around the world, though usually on a computer. If you want to buy an actual Atari 2600, you&#8217;ll have to shell out around $60 on eBay.</p>



<p>A.I. is developing quickly, and it&#8217;s satisfying to see it advance by playing old video games we all know and love. When A.I. can beat us at Super Smash Bros., then we&#8217;ll start worrying.</p>



<p>THE INVERSE ANALYSIS</p>



<p>It&#8217;s pretty incredible how much DeepMind&#8217;s A.I. has developed in a relatively short time. We&#8217;re curious what games this A.I. might master next, and how DeepMind will apply it once it moves on from dominating video games. As we&#8217;ve reported, A.I. is already capable of diagnosing cancer and predicting the weather, so there&#8217;s no telling what it&#8217;ll be able to do in the years to come. Hopefully, Elon Musk&#8217;s nightmares don&#8217;t come true and it doesn&#8217;t end up killing us all.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/">A.I. CAN NOW BEAT EVERY TITLE OF THIS ICONIC 1977 VIDEO GAME CONSOLE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-i-can-now-beat-every-title-of-this-iconic-1977-video-game-console/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A.I. Mastered Backgammon, Chess and Go. Now It Takes On StarCraft II</title>
		<link>https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/</link>
					<comments>https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 04 Nov 2019 06:44:37 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Games and Competition]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Video Games]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4972</guid>

					<description><![CDATA[<p>Source: smithsonianmag.com Last January, during a livestream on YouTube and Twitch, professional StarCraft II player Grzegorz “MaNa” Komincz from Poland struck a blow for humankind when he defeated a multi-million-dollar artificial <a class="read-more-link" href="https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/">A.I. Mastered Backgammon, Chess and Go. Now It Takes On StarCraft II</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: smithsonianmag.com</p>



<p>Last January, during a livestream on YouTube and Twitch, professional StarCraft II player Grzegorz “MaNa” Komincz from Poland struck a blow for humankind when he defeated a multi-million-dollar artificial intelligence agent known as AlphaStar, designed specifically to pummel human players in the popular real-time strategy game.</p>



<p>The public loss in front of tens of thousands of eSports fans was a blow for Google parent company Alphabet’s London-based artificial intelligence subsidiary, DeepMind, which developed AlphaStar. But even if the A.I. lost the battle, it had already won the war; a previous iteration had already defeated Komincz five times in a row and wiped the floor with his teammate, Dario “TLO” Wünsch, showing that AlphaStar had sufficiently mastered the video game, which machine learning researchers have chosen as a benchmark of A.I. progress.</p>



<p>In the months since, AlphaStar has only grown stronger and is now able to defeat 99.8 percent of StarCraft II players online, achieving Grandmaster rank in the game on the official site Battle.net, a feat described today in a new paper in the journal Nature.</p>



<p>Back in 1992, IBM first developed a rudimentary A.I. that learned to become a better backgammon player through trial and error. Since then, new A.I. agents have slowly but surely dominated the world of games, and the ability to master beloved human strategy games has become one of the chief ways artificial intelligence is assessed.</p>



<p>In 1997, IBM&#8217;s Deep Blue beat Garry Kasparov, the world&#8217;s best chess player, launching the era of digital chess supremacy. More recently, in 2016, DeepMind&#8217;s AlphaGo beat the best human players of the Chinese game Go, a complex board game with hundreds of possible moves each turn that some believed A.I. would not crack for another century. Late last year, AlphaZero, the next iteration of the A.I., not only taught itself to become the best chess player in the world in just four hours, it also mastered the chess-like Japanese game Shogi in two hours, as well as Go in just days.</p>



<p>While machines could probably dominate games like Monopoly or Settlers of Catan, A.I. research is now moving away from classic board games to video games, which, with their combination of physical dexterity, strategy, and randomness, can be much harder for machines to master.</p>



<p>“The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge,” David Silver, principal research scientist at DeepMind says in a statement. “The game’s complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10<sup>26</sup>&nbsp;possible choices for every move; and players have less information about their opponents than in poker.”</p>



<p>David Churchill, a computer scientist at the Memorial University of Newfoundland who has run an annual StarCraft A.I. tournament for the last decade and served as a reviewer for the new paper, says a game like chess plays into an A.I.&#8217;s strengths. Each player takes a turn, and each has as long as they need to consider the next move. Each move opens up a set of new moves. And each player commands all the information on the board: they can see what their opponent is doing and anticipate their next moves.</p>



<p>“StarCraft completely flips all of that. Instead of alternate move, it’s simultaneous move,” Churchill says. “And there’s a ‘fog of war’ over the map. There’s a lot going on at your opponent’s base that you can’t see until you have scouted a location. There’s a lot of strategy that goes into thinking about what your opponent could have, what they couldn’t have and what you should do to counteract that when you can’t actually see what&#8217;s happening.”</p>



<p>Add to that the fact that there can be 200 individual units on the field at any given time in StarCraft II, each with hundreds of possible actions, and the variables become astronomical. “It’s a way more complex game,” Churchill says. “It’s almost like playing chess while playing soccer.”</p>



<p>Over the years, Churchill has seen A.I. programs that could master one or two elements of StarCraft fairly well, but nothing could really pull it all together. The most impressive part of AlphaStar, he says, isn’t that it can beat humans; it’s that it can tackle the game as a whole.</p>



<p>So how did DeepMind&#8217;s A.I. go from knocking over knights and rooks to mastering soccer-chess with laser guns? Earlier A.I. agents, including DeepMind&#8217;s FTW algorithm, which earlier this year studied teamwork while playing the video game Quake III Arena, learned to master games by playing against versions of themselves. However, the two machine opponents were equally matched, equally aggressive algorithms, so the A.I. only learned a few styles of gameplay. It was like matching Babe Ruth against Babe Ruth; the A.I. learned how to handle home runs, but had less success against singles, pop flies and bunts.</p>



<p>The DeepMind team decided that for AlphaStar, instead of simply learning by playing against high-powered versions of itself, it would train against a group of A.I. systems they dubbed the League. While some of the opponents in the League were hell-bent on winning the game, others were more willing to take a walloping to help expose weaknesses in AlphaStar’s strategies, like a practice squad helping a quarterback work out plays.</p>
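<p>The League idea can be sketched with a deliberately tiny toy: a learner that trains against a pool of frozen past snapshots rather than only its latest self, so it cannot overfit to a single opponent style. All names and the rock-paper-scissors setting are illustrative assumptions, not AlphaStar&#8217;s actual training code:</p>

```python
import random

# Toy sketch of league-style training: the learner faces a pool ("league")
# of frozen opponents, and after each generation a snapshot of its current
# strategy is frozen and added to the pool.
random.seed(0)
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def make_fixed_policy(move):
    return lambda: move  # a frozen snapshot always plays the same move

class Learner:
    def __init__(self):
        self.counts = {m: 1 for m in BEATS}  # observed opponent moves
    def act(self):
        likely = max(self.counts, key=self.counts.get)
        return next(m for m in BEATS if BEATS[m] == likely)  # best response
    def observe(self, opp_move):
        self.counts[opp_move] += 1

learner = Learner()
league = [make_fixed_policy("rock")]  # initial opponent pool

for generation in range(3):
    for _ in range(50):  # play many games against sampled league members
        opponent = random.choice(league)
        learner.observe(opponent())
    league.append(make_fixed_policy(learner.act()))  # freeze a snapshot

# The learner now best-responds to the whole mix of styles it has seen,
# not just to its most recent self.
print(learner.act())
```

The key design point the article describes survives even in this toy: variety in the opponent pool is what forces the learner beyond a single style of play.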



<p>That strategy, combined with other A.I. research techniques like imitation learning, in which AlphaStar analyzed tens of thousands of previous matches, appears to work, at least when it comes to video games.</p>
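<p>Imitation learning in its simplest form, behavioral cloning, just fits a policy to recorded (state, action) pairs from human games. The sketch below is a hypothetical, heavily simplified illustration; the states, actions, and data are invented, not drawn from real StarCraft II replays:</p>

```python
from collections import Counter, defaultdict

# Toy sketch of imitation learning (behavioral cloning): learn a policy
# directly from examples of what human players did in each situation.
# All states, actions, and data here are invented for illustration.
replays = [
    ("enemy_rushing", "build_defense"),
    ("enemy_rushing", "build_defense"),
    ("enemy_rushing", "attack"),
    ("economy_ahead", "expand"),
    ("economy_ahead", "expand"),
]

policy = defaultdict(Counter)
for state, action in replays:
    policy[state][action] += 1  # count expert actions per situation

def act(state):
    # Imitate: pick the action the experts chose most often in this state.
    return policy[state].most_common(1)[0][0]

print(act("enemy_rushing"), act("economy_ahead"))
```

Cloning alone only reproduces the humans in the data; in the article&#8217;s account it served as a starting point that league training then improved upon.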



<p>Eventually, DeepMind believes this type of A.I. learning could be used for projects like robotics, medicine and in self-driving cars. “AlphaStar advances our understanding of A.I. in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed,” Oriol Vinyals, DeepMind research scientist and lead author of the new paper says in a statement. “I’m excited to begin exploring ways we can apply these techniques to real-world challenges.”</p>



<p>While AlphaStar is an incredible advance in A.I., Churchill thinks it still has room for improvement. For one thing, he thinks there are still humans who could beat the AlphaStar program, especially since the A.I. needs to train on any new maps added to the game, something he says human players can adapt to much more quickly. &#8220;They&#8217;re at the point where they&#8217;ve beaten sort of low-tier professional human players. They&#8217;re essentially beating benchwarmers in the NBA,&#8221; he says. &#8220;They have a long way to go before they&#8217;re ready to take on the LeBron James of StarCraft.&#8221;</p>



<p>Time will tell whether DeepMind will develop more techniques that make AlphaStar even better at blasting digital aliens. In the meantime, the company&#8217;s various machine learning projects have been taking on more earthly challenges, like predicting how proteins fold, deciphering ancient Greek texts, and diagnosing eye diseases as well as or better than doctors.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/">A.I. Mastered Backgammon, Chess and Go. Now It Takes On StarCraft II</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-i-mastered-backgammon-chess-and-go-now-it-takes-on-starcraft-ii/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
