<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>COMPUTER HARDWARE Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/computer-hardware/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/computer-hardware/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 05 Oct 2020 11:21:36 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>TensorFlow Quantum Boosts Quantum Computer Hardware Performance</title>
		<link>https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/</link>
					<comments>https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Oct 2020 11:21:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Boosts]]></category>
		<category><![CDATA[COMPUTER HARDWARE]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Quantum]]></category>
		<category><![CDATA[TensorFlow]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11948</guid>

					<description><![CDATA[<p>Source: marktechpost.com Google recently released TensorFlow Quantum, a toolset for combining state-of-the-art machine learning techniques with quantum algorithm design. This is an essential step to build tools for <a class="read-more-link" href="https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/">TensorFlow Quantum Boosts Quantum Computer Hardware Performance</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: marktechpost.com</p>



<p>Google recently released TensorFlow Quantum, a toolset for combining state-of-the-art machine learning techniques with quantum algorithm design. This is an essential step to build tools for developers working on quantum applications.</p>
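<p>The hybrid pattern TensorFlow Quantum supports can be illustrated without the TFQ API itself. Below is a minimal sketch (plain NumPy, not TFQ) of a variational quantum algorithm: a parameterized single-qubit circuit whose expectation value is minimized by gradient descent, with gradients obtained from circuit evaluations alone via the parameter-shift rule. The circuit, learning rate, and iteration count are illustrative choices, not details from the article.</p>

```python
import numpy as np

def ry(theta):
    # single-qubit Y-rotation gate, Ry(theta)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    # <psi|Z|psi> for |psi> = Ry(theta)|0>
    psi = ry(theta) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi @ z @ psi)

theta, lr = 0.1, 0.4
for _ in range(100):
    # parameter-shift rule: an exact gradient from two circuit evaluations
    grad = (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2)) / 2
    theta -= lr * grad
# the classical optimizer drives <Z> toward its minimum of -1 (theta -> pi)
```

The same loop structure scales to real hardware because the parameter-shift gradient only requires running the circuit at shifted parameter values, never differentiating through the quantum device.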



<p>Simultaneously, they have focused on improving quantum computing hardware performance by integrating a set of quantum firmware techniques and building a TensorFlow-based toolset that works from the hardware level up, starting at the bottom of the stack.</p>



<p>The fundamental driver for this work is tackling noise and error in quantum computers. Here is a brief overview of that effort and of how the impact of noise and imperfections, the critical challenges, is suppressed in quantum hardware.</p>



<p><strong>Noise And Error: The Chinks In The Armor When It Comes To Quantum Computers</strong></p>



<p>Quantum computing combines information processing and quantum physics to solve challenging computational problems. A significant issue in quantum computers, however, is their susceptibility to noise and error, which limits quantum computing hardware efficiency. Noise refers to anything that can cause interference, like electromagnetic signals from WiFi or disturbances in the Earth&#8217;s magnetic field. Most quantum computing hardware can run only a few dozen calculations, over much less than 1 ms, before the influence of noise forces a reset. That is about 10<sup>24</sup> times worse than the hardware in a laptop.</p>
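<p>The scale of the problem can be sketched with a toy decoherence model. The gate time and coherence time below are assumed, illustrative values (typical orders of magnitude for superconducting qubits, not figures from the article):</p>

```python
import numpy as np

gate_time_s = 100e-9      # assumed: 100 ns per operation
t2_s = 50e-6              # assumed: ~50 microsecond coherence window

def fidelity_after(n_ops):
    # simple exponential decoherence model: fidelity ~ exp(-t / T2)
    return np.exp(-(n_ops * gate_time_s) / t2_s)

# how many operations fit before fidelity drops below 90%?
n = 0
while fidelity_after(n + 1) > 0.9:
    n += 1
# n lands in the "few dozen" range the article describes
```

Under these assumptions the budget runs out after roughly fifty gates, which is why reducing noise at the hardware level buys computational depth directly.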



<p>Many teams have been working to make the hardware resistant to noise and overcome these weaknesses. Theorists have also designed a clever scheme called Quantum Error Correction (QEC). QEC can identify and fix errors in the hardware, but it is far too resource-intensive to be practical today: because the information in one qubit must be spread over many qubits, it may take a thousand or more physical qubits to realize just one error-corrected &#8220;logical qubit.&#8221;</p>
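<p>The overhead of error correction is easiest to see in its simplest classical analogue, the three-bit repetition code: one logical bit is spread over three physical bits, and a majority vote corrects any single flip. This toy model (a classical stand-in, not a full quantum code) shows both the benefit and the resource cost:</p>

```python
import random

def encode(bit):
    # repetition code: one logical bit -> three physical bits
    return [bit, bit, bit]

def apply_noise(bits, p):
    # each physical bit flips independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # majority vote corrects any single bit-flip
    return int(sum(bits) >= 2)

random.seed(0)
p = 0.05
trials = 20000
raw_errors = sum(apply_noise([0], p)[0] for _ in range(trials))
coded_errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
# encoding cuts the logical error rate from ~p to ~3p^2 (0.05 -> ~0.007),
# at the price of tripling the physical resources
```

Quantum codes face the same trade-off with far harsher ratios, which is where the thousand-to-one physical-to-logical estimate comes from.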



<p>To overcome this, Q-CTRL&#8217;s &#8220;quantum firmware&#8221; can stabilize the qubits against noise and decoherence without the need for extra resources. This is done by adding new solutions that improve the hardware&#8217;s robustness to error at the lowest layer of the quantum computing stack.</p>
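<p>A minimal example of this kind of low-level stabilization is the spin echo: against slowly varying noise, a single well-timed control pulse cancels the accumulated phase error without any extra qubits. The sketch below (plain NumPy with illustrative numbers, not Q-CTRL&#8217;s actual protocols) compares free evolution with the echoed sequence over an ensemble of quasi-static noise realizations:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
# quasi-static frequency detunings drawn per run (arbitrary units)
detunings = rng.normal(0.0, 1.0, size=1000)
t = 1.0

# free evolution: phase error delta * t accumulates uncorrected
free_phase = detunings * t
# spin echo: a pi pulse at t/2 flips the qubit, so the phase picked up
# in the second half exactly cancels the first half
echo_phase = detunings * t / 2 - detunings * t / 2

# ensemble coherence = |<exp(i * phase)>|
free_coherence = abs(np.mean(np.exp(1j * free_phase)))
echo_coherence = abs(np.mean(np.exp(1j * echo_phase)))
# echo_coherence stays ~1.0 while free_coherence decays
```

Real quantum firmware layers many such pulses (dynamical decoupling) and optimizes their timing, but the principle is this cancellation.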



<p>Quantum firmware protocols exist to deliver quantum hardware with augmented performance to the higher levels of abstraction in the quantum computing stack.</p>



<p>In general, quantum computing hardware relies on light-matter interaction, which is engineered to enact quantum logic operations.</p>
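<p>For example, in many platforms a resonant drive pulse rotates the qubit, and the rotation angle is set by the product of drive strength and pulse duration; a &#8220;pi pulse&#8221; enacts a logical NOT. A small sketch with assumed, illustrative numbers:</p>

```python
import numpy as np

def drive_pulse(rabi_rate, duration):
    # resonant drive rotates the qubit about X by angle = rabi_rate * duration
    angle = rabi_rate * duration
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

ket0 = np.array([1.0, 0.0], dtype=complex)
# assumed 1 MHz Rabi rate driven for 0.5 us gives angle = pi:
# a pi pulse enacts a logical NOT, |0> -> |1>
psi = drive_pulse(rabi_rate=2 * np.pi * 1e6, duration=0.5e-6) @ ket0
```

Because the gate is set by analog pulse parameters, imperfections in amplitude or timing translate directly into logic errors, which is exactly what the firmware layer above is designed to suppress.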



<p>The post <a href="https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/">TensorFlow Quantum Boosts Quantum Computer Hardware Performance</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/tensorflow-quantum-boosts-quantum-computer-hardware-performance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A beginner’s guide to the AI apocalypse: Artificial stupidity</title>
		<link>https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/</link>
					<comments>https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 18 Jul 2020 07:03:41 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI apocalypse]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[COMPUTER HARDWARE]]></category>
		<category><![CDATA[NICK BOSTROM]]></category>
		<category><![CDATA[Tech]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10283</guid>

					<description><![CDATA[<p>Source: thenextweb.com Welcome to the latest article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by <a class="read-more-link" href="https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/">A beginner’s guide to the AI apocalypse: Artificial stupidity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thenextweb.com</p>



<p>Welcome to the latest article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by modern AI experts. </p>



<p>In this edition we’re going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity.</p>



<p>But first, a few words about humans.</p>



<p>You won’t find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat to humans throughout all of history.</p>



<p>Luckily we’re still the smartest species on the planet, so we’ve managed to remain in charge for a long time despite our shortcomings. Unfortunately a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence isn’t as far from challenging our status as the apex intellects as you might think.</p>



<p>The experts will tell you that we’re&nbsp;<em>really</em>&nbsp;far away from human-level AI (HLAI). But maybe that’s because nobody’s quite sure what the benchmark for that would be. What should “a human” be able to do? Can you play the guitar? I can. Can you play the piano? I can’t.</p>



<p>Sure, you can argue that a human-level AI should be able to&nbsp;<em>learn</em>&nbsp;to play the guitar or the piano, just like a human can &#8211; many play both. But the point is that measuring human ability isn&#8217;t a cut-and-dried endeavor.</p>



<p>Computer scientist Roman Yampolskiy, of the University of Louisville, recently published a paper discussing this exact concept. He writes:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.</p><p>You decide to start by testing developed AI for some very basic abilities such as multiplying 317 by 913, and memorizing your phone number. To your surprise, the system fails on both tasks.</p><p>When you question the system’s creators, you are told that their AI is human-level artificial intelligence (HLAI) and as most people cannot perform those tasks neither can their AI. In fact, you are told, many people can’t even compute 13 x 17, or remember name of a person they just met, or recognize their coworker outside of the office, or name what they had for breakfast last Tuesday.</p><p>The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity.</p></blockquote>



<p>Trying to define what HLAI should and shouldn’t be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?</p>



<p>This line of reasoning usually swings the conversation to&nbsp;<em>narrow intelligence&nbsp;</em>versus&nbsp;<em>general intelligence.</em>&nbsp;But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.</p>



<p>Humans don&#8217;t work that way, however. There&#8217;s no general human intelligence. The combined potential of human ability is not achievable by any individual. If we build a machine capable of replacing any of us, it stands to reason that it will.</p>



<p>And that’s cause for concern. We don’t consider which ants are most talented when we wreck an anthill to build a softball field, why should our intellectual superiors?</p>



<p>The good news is that most serious AI experts don’t think GAI will happen anytime soon, so the most we’ll have to deal with is whatever fuzzy definition of HLAI the person or company who claims it comes up with. Much like Google decided it had achieved quantum supremacy by coming up with an arbitrary (and disputed) benchmark, it’ll surprise nobody in the industry if, for example, the AI crew at Facebook determines that a specific translation algorithm they’ve invented meets their self-imposed criteria for HLAI (or something like that). Maybe it’ll be Amazon or OpenAI.</p>



<p>The bad news is that you also won&#8217;t find many reputable scientists willing to rule GAI out. And that means we could be a &#8220;eureka!&#8221; or two away from someone like Ian Goodfellow oopsing up an algorithm that ties general intelligence to hardware. And when that happens, we could be looking at Bostrom&#8217;s Paperclip Maximizer in full effect. In other words: the robots won&#8217;t kill us out of spite, they&#8217;ll just forget we exist and transform the world and its habitats to suit their needs, just as we did.</p>



<p>That&#8217;s one theory anyway. And, as with any potential extinction scenario, it&#8217;s important to have a plan to stop it. Since we can&#8217;t know exactly what will happen once a superintelligent artificial being emerges, we should probably just start hard-coding &#8220;artificial stupidity&#8221; into the mix.</p>



<p>The right dose of unwavering limitations (think Asimov&#8217;s Laws of Robotics, but more specific to the number of parameters or the amount of compute a given model can use, and to what level of network integration can exist between disparate systems) could spell the difference between our existence and extinction.</p>



<p>So, rather than attempting to program advanced AI with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/">A beginner’s guide to the AI apocalypse: Artificial stupidity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-beginners-guide-to-the-ai-apocalypse-artificial-stupidity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
