<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>CHIP Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/chip/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/chip/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 16 Jun 2021 05:12:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster</title>
		<link>https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/</link>
					<comments>https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 16 Jun 2021 05:12:22 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[CHIP]]></category>
		<category><![CDATA[Computer]]></category>
		<category><![CDATA[Designed]]></category>
		<category><![CDATA[engineer]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[human]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14350</guid>

					<description><![CDATA[<p>Source &#8211; https://singularityhub.com/ AI has finally come full circle. A new suite of algorithms by Google Brain can now design computer chips—those specifically tailored for running AI software—that vastly <a class="read-more-link" href="https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/">A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://singularityhub.com/</p>



<p>AI has finally come full circle.</p>



<p>A new suite of algorithms by Google Brain can now design computer chips—those specifically tailored for running AI software—that vastly outperform those designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long process that normally gums up digital innovation.</p>



<p>At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the human brain’s workings, has triumphed over its biological neural inspirations in games such as Chess, Go, and nearly the entire Atari catalog.</p>



<p>But game play was just these AI agents’ kindergarten training. More recently, they’ve grown to tackle new drugs for Covid-19, solve one of biology’s grandest challenges, and reveal secrets of the human brain.</p>



<p>In the new study, deep reinforcement learning is flexing its muscles in the real world once again, this time by crafting the hardware that lets AI run more efficiently. The team cleverly adapted elements of game play into the chip design challenge, resulting in designs that were utterly “strange and alien” to human designers, but nevertheless worked beautifully.</p>



<p>It’s not just theory. A number of the AI’s chip design elements were incorporated into Google’s tensor processing unit (TPU), the company’s AI accelerator chip, which was designed to help AI algorithms run more quickly and efficiently.</p>



<p>“That was our vision with this work,” said study author Anna Goldie. “Now that machine learning has become so capable (and that’s all thanks to advancements in hardware and systems), can we use AI to design better systems to run the AI algorithms of the future?”</p>



<h3 class="wp-block-heading">The Science and Art of Chip Design</h3>



<p>I don’t generally think about the microchips in my phone, laptop, and a gazillion other devices spread across my home. But they’re the bedrock—the hardware “brain”—that controls these beloved devices.</p>



<p>Often no larger than a fingernail, microchips are exquisite feats of engineering that pack tens of millions of components to optimize computations. In everyday terms, a badly-designed chip means slow loading times and the spinning wheel of death—something no one wants.</p>



<p>The crux of chip design is a process called “floorplanning,” said Dr. Andrew Kahng, at the University of California, San Diego, who was not involved in this study. Similar to arranging your furniture after moving into a new space, chip floorplanning involves shifting the location of different memory and logic components on a chip so as to optimize processing speed and power efficiency.</p>



<p>It’s a horribly difficult task. Each chip contains millions of logic gates, which are used for computation. Scattered alongside these are thousands of memory blocks, called macro blocks, which save data. These two main components are then interlinked through tens of miles of wiring so the chip performs as optimally as possible—in terms of speed, heat generation, and energy consumption.</p>



<p>“Given this staggering complexity, the chip-design process itself is another miracle—in which the efforts of engineers, aided by specialized software tools, keep the complexity in check,” explained Kahng. Often, floorplanning takes weeks or even months of painstaking trial and error by human experts.</p>



<p>Yet even with six decades of study, the process is still a mixture of science and art. “So far, the floorplanning task, in particular, has defied all attempts at automation,” said Kahng. One estimate puts the number of different configurations for just the placement of “memory” macro blocks at about 10<sup>2,500</sup>—many orders of magnitude larger than the number of stars in the universe.</p>
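<p>To make that scale concrete, the number of ordered ways to place blocks on a grid explodes factorially. A minimal sketch, where the grid size and block count are illustrative toy values rather than figures from the study:</p>

```python
from math import perm

# Ordered placements of k macro blocks onto n distinct grid cells:
# P(n, k) = n! / (n - k)!.
def placement_count(n_cells: int, n_blocks: int) -> int:
    return perm(n_cells, n_blocks)

# Even a toy 100-cell grid with 20 blocks yields a roughly 40-digit count,
# hinting at how 10^2,500 arises for real chips with thousands of blocks.
print(len(str(placement_count(100, 20))))
```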



<h3 class="wp-block-heading">Game Play to the Rescue</h3>



<p>Given this complexity, it seems crazy to try automating the process. But Google Brain did just that, with a clever twist.</p>



<p>If you think of macro blocks and other components as chess pieces, then chip design becomes a sort of game, similar to those previously mastered by deep reinforcement learning. The agent’s task is to sequentially place macro blocks, one by one, onto a chip in an optimized manner to win the game. Of course, any naïve AI agent would struggle. As background learning, the team trained their agent with over 10,000 chip floorplans. With that library of knowledge, the agent could then explore various alternatives.</p>
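<p>The game framing above can be sketched as a toy environment: the state is the set of blocks placed so far, an action picks a free grid cell for the next block, and the episode ends when every block is down. The grid size, block count, and wirelength-based score below are illustrative stand-ins, not the study’s actual formulation:</p>

```python
import random

class PlacementGame:
    """Toy chip-placement 'game': place blocks one by one on a grid."""

    def __init__(self, grid=8, blocks=5):
        self.grid, self.blocks = grid, blocks
        self.reset()

    def reset(self):
        self.placed = []  # (x, y) of each macro block placed so far
        return tuple(self.placed)

    def legal_actions(self):
        # Any grid cell not already occupied by a placed block.
        return [(x, y) for x in range(self.grid) for y in range(self.grid)
                if (x, y) not in self.placed]

    def step(self, action):
        self.placed.append(action)
        done = len(self.placed) == self.blocks
        # Reward arrives only at episode end: here, negative total pairwise
        # Manhattan wirelength, a toy proxy for the real power/performance score.
        reward = -self._wirelength() if done else 0.0
        return tuple(self.placed), reward, done

    def _wirelength(self):
        return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                   for i, a in enumerate(self.placed)
                   for b in self.placed[i + 1:])

# Play one episode with a random (naive) agent.
env = PlacementGame()
state, done = env.reset(), False
while not done:
    state, reward, done = env.step(random.choice(env.legal_actions()))
print(reward)  # final score: shorter total wiring scores higher
```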



<p>During design, the agent worked through a “trial-and-error” process similar to how we learn. At any stage of developing the floorplan, it assesses how it’s doing using a learned strategy and decides on the most optimal way to move forward—that is, where to place the next component.</p>



<p>“It starts out with a blank canvas, and places each component of the chip, one at a time, onto the canvas. At the very end it gets a score—a reward—based on how well it did,” explained Goldie. The feedback is then used to update the entire artificial neural network, which forms the basis of the AI agent, and get it ready for another go-around.</p>
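<p>The end-of-episode feedback loop Goldie describes can be illustrated with a tiny policy-gradient update on a four-choice bandit: the final score nudges the policy’s parameters toward higher-scoring choices. The logits, learning rate, and reward table are hypothetical and vastly simpler than the actual neural network:</p>

```python
import numpy as np

logits = np.zeros(4)                       # stand-in for the policy network's parameters
rewards = np.array([0.0, 1.0, 0.2, -0.5])  # hypothetical end-of-episode scores per action
lr = 0.1

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    # Exact policy gradient of the expected score:
    # d E[r] / d logit_k = p_k * (r_k - E[r]).
    expected = probs @ rewards
    logits += lr * probs * (rewards - expected)

# The policy comes to favor the best-scoring action (index 1 here).
print(int(np.argmax(logits)))
```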



<p>The score is carefully crafted to follow the constraints of chip design, which aren’t always the same. Each chip is its own game. A chip deployed in a data center, for example, will need to optimize power consumption, while a chip for self-driving cars should care more about latency so it can rapidly detect potential dangers.</p>



<h3 class="wp-block-heading">The Bio-Chip</h3>



<p>Using this approach, the team didn’t just find a single chip design solution. Their AI agent was able to adapt and generalize, needing just six extra hours of computation to identify optimized solutions for any specific needs.</p>



<p>“Making our algorithm generalize across these different contexts was a much bigger challenge than just having an algorithm that would work for one specific chip,” said Goldie.</p>



<p>It’s a sort of “one-shot” mode of learning, said Kahng, in that it can produce floorplans “superior to those developed by human experts for existing chips.” A main throughline seemed to be that the AI agent laid down macro blocks in decreasing order of size. But what stood out was just how alien the designs were. The placements were “rounded and organic,” a massive departure from conventional chip designs with angular edges and sharp corners.</p>



<p>Human designers thought “there was no way that this is going to be high quality. They almost didn’t want to evaluate them,” said Goldie.</p>



<p>But the team pushed the project from theory to practice. In January, Google integrated some AI-designed elements into their next-generation AI processors. While specifics are being kept under wraps, the solutions were intriguing enough for millions of copies to be physically manufactured.</p>



<p>The team plans to release its code for the broader community to further optimize—and understand—the machine’s brain for chip design. What seems like magic today could provide insights into even better floorplan designs, extending the gradually-slowing (or dying) Moore’s Law to further bolster our computational hardware. Even tiny improvements in speed or power consumption in computing could make a massive difference.</p>



<p>“We can…expect the semiconductor industry to redouble its interest in replicating the authors’ work, and to pursue a host of similar applications throughout the chip-design process,” said Kahng.</p>



<p>“The level of the impact that [a new generation of chips] can have on the carbon footprint of machine learning, given it’s deployed in all sorts of different data centers, is really valuable. Even one day earlier, it makes a big difference,” said Goldie.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/">A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-google-ai-designed-a-computer-chip-as-well-as-a-human-engineer-but-much-faster/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Proposes AI as Solution for Speedier AI Chip Design</title>
		<link>https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/</link>
					<comments>https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 07 Apr 2020 06:51:34 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[CHIP]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Google]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8003</guid>

					<description><![CDATA[<p>Source: allaboutcircuits.com Considering that thousands of components must be packed onto a tiny fingernail-sized chip, this can be difficult. The trouble is that it can take several <a class="read-more-link" href="https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/">Google Proposes AI as Solution for Speedier AI Chip Design</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: allaboutcircuits.com</p>



<p>Designing a chip is difficult, considering that thousands of components must be packed into a fingernail-sized area. The trouble is that a chip can take several years to design, and the world of machine learning and artificial intelligence (AI) moves much faster than that.</p>



<p>In an ideal world, you want a chip that is designed quickly enough to be optimized for today’s AI challenges, not the AI challenges of several years ago.&nbsp;</p>



<p>Now, Alphabet’s Google has proposed an AI solution that could advance the internal development of its own chips. The solution? To train AI chips to design themselves.&nbsp;</p>



<h3 class="wp-block-heading">Shortening the AI Chip Design Cycle</h3>



<p>In a research paper posted to arXiv on March 23, the researchers describe how they “believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fuelling advances in the other.”</p>



<p>The research describes how a machine learning program can be used to make decisions about how to plan and lay out a chip’s circuitry, with the final design being just as good as, or better than, ones made by humans.</p>



<p>According to Jeff Dean, Google’s head of AI research, this program is currently being used internally for exploratory chip design projects. The company is already known for developing a family of AI hardware over the years, including its Tensor Processing Unit (TPU) for processing AI in its servers. </p>



<h3 class="wp-block-heading">The Chip Design Challenge</h3>



<p>Planning a chip’s circuitry, often referred to as “placement” or “floorplanning”, is very time-consuming. As chips continually improve, final designs quickly become outdated; although chips are designed to last two to five years, there is constant pressure on engineers to reduce the time between upgrades.</p>



<p>Floorplanning involves placing logic and memory blocks, or clusters of them, in a way that maximizes performance and power efficiency while minimizing footprint. This is already challenging enough, and the process is made harder still by the fact that rules about the density of interconnects must be followed at the same time.</p>



<p>Even with today’s advanced tools and processes, human engineers require weeks of time and multiple iterations to produce an acceptable design for an AI chip.</p>



<h3 class="wp-block-heading">Using AI for Chip Floor Planning</h3>



<p>However, Google’s research is said to have made major improvements to this process. In the Arxiv paper, research engineers Anna Goldie and Azalia Mirhoseini claim to have designed an algorithm that learns how to achieve optimum placement of chip circuitry. It does this by studying existing chip designs in order to produce its own.&nbsp;</p>



<p>According to Goldie and Mirhoseini, it is able to do this in a fraction of the time currently required by human designers and is capable of analyzing millions of design possibilities as opposed to thousands. This enables it to spit out chip designs that not only utilize the latest developments but are cheaper and smaller, too.</p>



<h4 class="wp-block-heading">Repeated Tasks Result in Higher Performance</h4>



<p>During their research, the duo modeled chip placement as a reinforcement learning problem. These systems, unlike conventional deep learning ones, learn by doing rather than training on a large dataset. They adjust the parameters in their networks according to a “reward signal” that is sent when they succeed in a task.</p>



<p>In the case of chip design, the reward signal is a combined measure of power reduction, area reduction, and performance improvement. As a result, the program becomes better at its task the more times it does it.&nbsp;</p>
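<p>Such a combined measure can be sketched as a weighted score; the weights and metric names below are illustrative stand-ins, not the paper’s actual cost function:</p>

```python
# Toy combined reward: penalize power, area, and latency together,
# so reducing any of them (all else equal) raises the score.
def placement_reward(power_w, area_mm2, latency_ns,
                     w_power=1.0, w_area=1.0, w_perf=1.0):
    """Higher is better."""
    return -(w_power * power_w + w_area * area_mm2 + w_perf * latency_ns)

# A design that cuts power and area at equal latency scores higher.
baseline = placement_reward(power_w=2.0, area_mm2=4.0, latency_ns=10.0)
improved = placement_reward(power_w=1.5, area_mm2=3.5, latency_ns=10.0)
assert improved > baseline
```

<p>Per-chip priorities (a data-center chip versus a latency-sensitive one) would simply correspond to different weight choices.</p>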



<h3 class="wp-block-heading">A Solution to Moore&#8217;s Law</h3>



<p>If this research is as promising as Google’s researchers suggest, it could help ensure the continuation of Moore’s Law, the observation that the number of transistors on a chip doubles every one to two years. In the 1970s, chips generally had a few thousand transistors; today, some host billions.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/">Google Proposes AI as Solution for Speedier AI Chip Design</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-proposes-ai-as-solution-for-speedier-ai-chip-design/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
