<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>CHIPS Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/chips/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/chips/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 01 Mar 2021 06:53:04 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Google’s deep learning finds a critical path in AI chips</title>
		<link>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/</link>
					<comments>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 06:53:02 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[CHIPS]]></category>
		<category><![CDATA[Critical]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Google’s]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13136</guid>

					<description><![CDATA[<p>Source &#8211; https://www.zdnet.com/ The work marks a beginning in using machine learning techniques to optimize the architecture of chips. This month, Google unveiled to the world one <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Google’s deep learning finds a critical path in AI chips</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.zdnet.com/</p>



<p>The work marks a beginning in using machine learning techniques to optimize the architecture of chips.</p>



<p>This month, Google unveiled one of its research projects in this vein, called Apollo, in a paper posted on the arXiv pre-print server, &#8220;Apollo: Transferable Architecture Exploration,&#8221; and a companion blog post by lead author Amir Yazdanbakhsh. </p>



<p>Apollo represents an intriguing development that moves past what Google AI chief Jeff Dean hinted at in his formal address a year ago at the International Solid-State Circuits Conference, and in his remarks to&nbsp;<em>ZDNet</em>.</p>



<p>In the example Dean gave at the time, machine learning could be used for some low-level design decisions, known as &#8220;place and route.&#8221; In place and route, chip designers use software to determine the layout of the circuits that form the chip&#8217;s operations, analogous to designing the floor plan of a building.</p>



<p>In Apollo, by contrast, rather than a floor plan, the program is performing what Yazdanbakhsh and colleagues call &#8220;architecture exploration.&#8221;&nbsp;</p>



<p>The architecture for a chip is the design of the functional elements of a chip, how they interact, and how software programmers should gain access to those functional elements.&nbsp;</p>



<p>For example, a classic Intel x86 processor has a certain amount of on-chip memory, a dedicated arithmetic-logic unit, and a number of registers, among other things. The way those parts are put together gives the so-called Intel architecture its meaning.</p>



<p>Asked about Dean&#8217;s description, Yazdanbakhsh told&nbsp;<em>ZDNet</em>&nbsp;in email, &#8220;I would see our work and place-and-route project orthogonal and complementary.</p>



<p>&#8220;Architecture exploration is much higher-level than place-and-route in the computing stack,&#8221; explained Yazdanbakhsh, referring to a presentation by Cornell University&#8217;s Christopher Batten. </p>



<p>&#8220;I believe it [architecture exploration] is where a higher margin for performance improvement exists,&#8221; said Yazdanbakhsh.</p>



<p>Yazdanbakhsh and colleagues call Apollo the &#8220;first transferable architecture exploration infrastructure,&#8221; the first program that gets better at exploring possible chip architectures the more it works on different chips, thus transferring what is learned to each new task.</p>



<p>The chips that Yazdanbakhsh and the team are developing are themselves chips for AI, known as accelerators. This is the same class of chips as the Nvidia A100 &#8220;Ampere&#8221; GPUs, the Cerebras Systems WSE chip, and the many parts from startups currently hitting the market. Hence, a nice symmetry: using AI to design chips to run AI.</p>



<p>Given that the task is to design an AI chip, the architectures that the Apollo program is exploring are architectures suited to running neural networks. And that means lots of linear algebra, lots of simple mathematical units that perform matrix multiplications and sum the results.</p>
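<p>At bottom, each of those &#8220;simple mathematical units&#8221; performs a multiply-accumulate, the primitive step of a matrix product. As a rough sketch in plain Python (illustrative only, not Google&#8217;s hardware or code), the operation an accelerator repeats billions of times per second is just:</p>

```python
def matmul(a, b):
    """Naive matrix multiply: the multiply-accumulate loops that
    an accelerator's math units implement in parallel in hardware."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one multiply-accumulate step
            out[i][j] = acc
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19.0, 22.0], [43.0, 50.0]]
```

<p>An accelerator architecture is largely a question of how many of these multiply-accumulate units to lay down and how to feed them with data.</p>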



<p>The team defines the challenge as one of finding the right mix of those math blocks to suit a given AI task. They chose a fairly simple AI task, a convolutional neural network called MobileNet, which is a resource-efficient network designed in 2017 by Andrew G. Howard and colleagues at Google. In addition, they tested workloads using several internally designed networks for tasks such as object detection and semantic segmentation.&nbsp;</p>
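<p>MobileNet&#8217;s resource efficiency comes chiefly from depthwise separable convolutions, which split a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step. A quick parameter count shows why that matters for an accelerator&#8217;s budget (the formulas follow the MobileNet paper; the 3x3, 256-channel layer is an illustrative example, not one taken from the article):</p>

```python
def standard_conv_params(k, c_in, c_out):
    # standard conv: one k x k kernel over all input channels,
    # per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k spatial filter per input channel,
    # pointwise: a 1 x 1 conv that mixes channels
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 256 input and 256 output channels
std = standard_conv_params(3, 256, 256)        # 589,824 weights
sep = depthwise_separable_params(3, 256, 256)  # 67,840 weights
print(std, sep, round(std / sep, 1))  # roughly 8.7x fewer parameters
```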



<p>In this way, the goal becomes,&nbsp;<em>What are the right parameters for the architecture of a chip such that for a given neural network task, the chip meets certain criteria such as speed?</em></p>



<p>The search involved sorting through over 452 million parameters, including how many of the math units, called processor elements, would be used, and how much parameter memory and activation memory would be optimal for a given model.&nbsp;</p>
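<p>Exhaustively sweeping a space of hundreds of millions of configurations is infeasible, which is why Apollo applies learned search strategies. As a toy sketch of the general setup (the parameter names, value ranges, and cost model below are hypothetical stand-ins, not Apollo&#8217;s actual design space or code), even plain random search over such a space looks like this:</p>

```python
import random

# Hypothetical accelerator design space, loosely modeled on the kinds
# of parameters the article mentions (processing elements, memories).
DESIGN_SPACE = {
    "num_processing_elements": [64, 128, 256, 512],
    "param_memory_kb": [256, 512, 1024, 2048],
    "activation_memory_kb": [128, 256, 512, 1024],
}

def estimated_latency(cfg):
    """Stand-in cost model; a real flow calls a hardware simulator here."""
    return 1e6 / cfg["num_processing_elements"] + 1e5 / cfg["param_memory_kb"]

def random_search(trials, seed=0):
    """Sample random configurations and keep the cheapest one seen."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}
        cost = estimated_latency(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best, cost = random_search(1000)
print(best, cost)
```

<p>Apollo&#8217;s contribution is replacing this blind sampling with strategies (evolutionary and model-based) that learn which regions of the space are promising, and transferring that knowledge between tasks.</p>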



<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Google’s deep learning finds a critical path in AI chips</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</title>
		<link>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 11 Apr 2020 11:17:20 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[CHIPS]]></category>
		<category><![CDATA[NEUROMORPHIC]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8128</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The age of traditional computers is reaching its limit. Without innovations taking place, it is difficult to move past the technology threshold. Hence it is <a class="read-more-link" href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The age of traditional computers is reaching its limit. Without fresh innovation, it is difficult to move past the current technology threshold; a major design transformation with improved performance is needed, one that can change the way we view computers. Moore’s law, the 1965 observation named after Gordon Moore, states that the number of transistors in a dense integrated circuit doubles about every two years while their price halves. But the law is now losing its validity. Hence hardware and software experts have converged on two candidate solutions: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained largely a lab effort until recently, when Intel announced its neuromorphic chip, Loihi. This may mark the third wave of Artificial Intelligence.</p>
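<p>Moore’s observation is easy to state quantitatively: under a two-year doubling period, transistor counts grow by a factor of 2 raised to (years / 2). A back-of-the-envelope projection in Python (illustrative numbers only):</p>

```python
def transistors_after(initial_count, years, doubling_period=2):
    """Project a transistor count under Moore's-law doubling."""
    return initial_count * 2 ** (years / doubling_period)

# e.g. a chip with 1 million transistors, projected 20 years out:
print(transistors_after(1_000_000, 20))  # 2**10 = 1024x growth
```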



<p>The first generation of AI was marked by hand-defined rules and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second generation used deep learning networks to analyze content and data, and was largely concerned with sensing and perception. The third generation is about drawing parallels to the human thought process, like interpretation and autonomous adaptation. In short, it mimics the spiking neurons of the human nervous system, relying on densely connected transistors that mimic the activity of ion channels. This allows such chips to integrate memory, computation, and communication at higher speed, with greater complexity and better energy efficiency.</p>
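<p>The spiking behavior described above is commonly modeled in software as a leaky integrate-and-fire (LIF) neuron: the membrane potential accumulates input current, decays (“leaks”) over time, and emits a spike when it crosses a threshold. A minimal sketch of the generic model (not Intel’s implementation):</p>

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: integrate input current, decay by
    the leak factor each step, spike and reset at the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)      # emit a spike
            potential = 0.0       # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9]))  # → [0, 0, 1, 0, 0]
```

<p>Information is carried by the timing and rate of these binary spikes rather than by continuous activations, which is what lets neuromorphic hardware stay idle, and save energy, between events.</p>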



<p>Loihi is Intel’s fifth-generation neuromorphic chip. This 14-nanometer chip has a 60-square-millimeter die and contains over 2 billion transistors, as well as three managing Lakemont cores for orchestration. It contains a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 cores. Each core has a built-in learning module, and the chip holds around 131,000 computational “neurons” that communicate with one another, allowing the chip to understand stimuli. On March 16, Intel and Cornell University showcased a new system demonstrating the ability of this chip to learn and recognize 10 hazardous materials from their smell, even in the presence of data noise and occlusion. According to their joint paper in Nature Machine Intelligence, the system can be used to detect the presence of explosives, narcotics, polymers, and other harmful substances, as well as signs of smoke, carbon monoxide, etc. It can purportedly do this faster and more accurately than sniffer dogs, threatening to replace them. The researchers achieved this by training the chip on a circuit modeled on biological olfaction. They built the training dataset by circulating ten hazardous chemicals (including acetone, ammonia, and methane) through a wind tunnel and recording the signals from a set of 72 chemical sensors.</p>



<p>This tech has manifold applications, like identifying harmful substances at airports and detecting the presence of diseases or toxic fumes in the air. The best part is that it constantly re-wires its internal network to allow different types of learning. Future versions could transform traditional computers into machines that learn from experience and make cognitive decisions, making them adaptive like human senses. And to put a cherry on top, it uses a fraction of the energy of current state-of-the-art systems. It is predicted to displace Graphics Processing Units (GPUs).</p>



<p>Although Loihi may soon evolve into a household word, it is not the only effort. The neuromorphic approach is being investigated by IBM, HPE, MIT, Purdue, Stanford, and others. IBM is in the race with its TrueNorth chip, which has 4,096 cores, each having 256 neurons, with each neuron having 256 synapses to communicate with the others. Germany’s Institute of Neuroscience and Medicine at the Jülich Research Centre and the UK’s Advanced Processor Technologies Group at the University of Manchester are working on a low-grade supercomputer called SpiNNaker, which stands for Spiking Neural Network Architecture. It is believed to simulate so-called cortical microcircuits, and hence the human brain cortex, and to help us understand complex diseases like Alzheimer’s.</p>
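<p>Those TrueNorth figures multiply out to roughly a million neurons and over a quarter-billion synapses, as quick arithmetic on the numbers quoted above shows:</p>

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

total_neurons = cores * neurons_per_core              # 1,048,576 neurons
total_synapses = total_neurons * synapses_per_neuron  # 268,435,456 synapses
print(total_neurons, total_synapses)
```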



<p>Who knows what computational trends we may see in the coming years. But one thing is sure: the team at Analytics Insight will keep a close watch on them.</p>
<p>The post <a href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
