<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>artificial general intelligence Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/artificial-general-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/artificial-general-intelligence/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 07 Jun 2019 07:00:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Humans&#8217; Fascination with Artificial General Intelligence</title>
		<link>https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Jun 2019 07:00:56 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[CAIS]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[iRobot]]></category>
		<category><![CDATA[Neuralink]]></category>
		<category><![CDATA[TED Talk]]></category>
		<category><![CDATA[transhumanism]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3599</guid>

					<description><![CDATA[<p>Source:- informationweek.com Recently I was asked by my company to develop a presentation for staff on the origins, present state and plausible future outcomes for artificial intelligence. This <a class="read-more-link" href="https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/">Humans&#8217; Fascination with Artificial General Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- informationweek.com</p>
<p>Recently I was asked by my company to develop a presentation for staff on the origins, present state and plausible future outcomes for artificial intelligence. This is in keeping with my position as the global lead for our AI Center of Excellence. And that process led to an exploration of Artificial General Intelligence (AGI), when it might arrive and the implications for better or worse.</p>
<p>New artificial intelligence capabilities appear every day. In a single day just recently, an avid reader would have found articles about how AI might one day help us to predict earthquakes, how wearable AI will amplify human intelligence, how the technology is being used to create new alloys for 3D printing, how it is changing agriculture, and more. On that day, there were at least a dozen such headlines about how AI is transforming industry and society.</p>
<p>All this stems from “narrow” AI, algorithms that, while powerful, are only able to do one thing, such as play chess, determine the probability that an oil drill bit is about to fail, or more intelligently route calls to service center agents.</p>
<p>While narrow AI applications certainly appear intelligent, their functionality is limited to their specific programming.  For example, if you ask an AI-powered digital assistant to turn on the lights, the natural language processing algorithm identifies certain keywords such as “lights” and “on” and then responds by turning on the lights. That may appear to be a human-like intelligence, but these systems are only responding to programming. At the end of the day, the digital assistant doesn’t understand what is being said in the way that a person does. In the same way, a chess-playing AI can’t recognize images or direct you from point A to point B.</p>
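<p>The keyword matching described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not from the article, and far simpler than any production assistant): the "assistant" only maps certain keywords to a canned action and has no understanding of the utterance itself.</p>

```python
# Hypothetical sketch of narrow-AI keyword matching: the system responds
# to programmed keywords ("lights", "on") and nothing else.

def handle_command(utterance: str) -> str:
    """Return an action name if known keywords appear, else a fallback."""
    words = set(utterance.lower().split())
    if {"lights", "on"} <= words:
        return "lights_on"
    if {"lights", "off"} <= words:
        return "lights_off"
    return "unknown"  # anything outside its narrow programming

print(handle_command("Please turn the lights on"))    # lights_on
print(handle_command("What is the capital of Peru"))  # unknown
```

<p>Any request outside the programmed keyword set falls straight through to the fallback, which is the sense in which such systems "appear" intelligent without understanding.</p>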
<p>The goal has long been to develop AI<em> </em>to the point where the machine&#8217;s intellectual capability is functionally equivalent to a human &#8212; that it learns and thinks much as a person does. This is artificial general intelligence. AGI does not yet exist, even though this is what was discussed 63 years ago at the famous Dartmouth conference where the term artificial intelligence was originated. As stated in a Smithsonian article, “What the scientists were talking about in their sylvan hideaway was how to build a machine that could think.”</p>
<p>AGI is vastly different from AI today insofar as it will take on more human-like characteristics and can transfer knowledge from one domain to another as needed. In other words, AGI will be able to make connections and learn how to learn, to generalize and acquire new skills the way humans do. In theory, this could lead to an AGI that could carry out any task a human could. This is widely thought of as the Holy Grail in AI. At the very least, an AGI would be able to combine human-like thinking with the mind-boggling speed of computers, leading to advantages such as near-instant recall and millisecond number crunching.</p>
<p>Much as we struggle to fully understand how the brain operates, the complexity of developing this technology remains beyond our grasp. There are AI experts who don’t believe AGI will ever be achieved, or at least not for another hundred years or more. Nevertheless, a survey of these experts revealed a median estimate for AGI of 2040. That’s only a single generation into the future.</p>
<p>Many companies are working towards AGI. For example, there are claims that DeepMind, a division of Google parent Alphabet, has already developed an early form, though there are no current meaningful examples in widespread use. At Google I/O, Google’s AI lead, Jeff Dean, stated that they are looking at &#8220;AI that can work across disciplines.&#8221; Will it really take Google or DeepMind or another 20 years or more to develop AGI, or might this be much closer than predicted?</p>
<p>As with all technology, AI arises from the human mind and our collective knowledge. Yet, much of human invention comes from moments of insight, unexpected illumination, enlightenment, genius and even serendipity. While incremental gains may ultimately lead to AGI, it’s the unexpected path that will likely lead to an AGI breakthrough, and the timeline is entirely unpredictable.</p>
<p>Once AGI exists, what happens to humans? The thought of creating consciousness and advanced intelligence has long been the stuff of nightmares, from Frankenstein to HAL 9000 and the Terminator. As explained by neuroscientist and philosopher Sam Harris, there is an implicit existential danger in such a development. In his TED Talk, he describes how AGI is surely inevitable and that while we may view this as cool, we should be scared.</p>
<p>Harris adds that the AGI future depicted in science fiction movies such as Ex Machina is often seen as fun, engaging, escapist and entertaining. In his view, however, when these plots become real life, the gains we will make with intelligent machines could ultimately destroy us. He warns that we are so far unable to marshal an appropriate emotional response to the dangers ahead. In effect, he says that we are transfixed, like moths drawn to a flame, fascinated by the curious light without thought to the implications of our actions. If, as The New Yorker asks, the arc of the universe bends toward an intelligence sufficient to understand it, will an AGI be the solution, or the end of the [human] experiment?</p>
<p>While AGI may not be far into the future, there are those who disagree. Rodney Brooks, roboticist and co-founder of iRobot, believes this won’t be seen until the year 2300. In arguing that AGI has been delayed, his view is “if AGI is a long way off then we cannot say anything sensible today about what promises or threats it might provide as we need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we cannot yet predict.” There are also those who think AGI will take a different form: that narrow AI will continue to be developed until the collection of algorithms forms Comprehensive AI Services (CAIS) that resemble a general intelligence.</p>
<p><strong>Is our species destined for transhumanism? </strong></p>
<p>Ultimately, there’s no way of knowing just when AGI will appear or in what manner. It could take until 2300 or could happen tomorrow with some yet unannounced and seemingly miraculous achievement. One thing that everyone seems to agree upon is the inherent risk to humanity.</p>
<p>That has led Elon Musk to found Neuralink, with plans for an electrode-to-neuron-based brain-computer interface. Juniper Research believes these Brain Machine Interfaces &#8212; devices that connect computers to the brain &#8212; will reach 25.6 million units by 2030. Neuralink is hoping to one day build a device with AI that people could access with their thoughts, and ultimately achieve a symbiosis with AI. Musk has said this would allow humans to reach higher levels of cognition and give them a better shot at competing against AGI. The result will be the next generation of humans, the transhuman. Or perhaps The Borg. In other words, if you can’t beat them, join them.</p>
<p>The post <a href="https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/">Humans&#8217; Fascination with Artificial General Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/humans-fascination-with-artificial-general-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Human Rights without humans: The final line between artificial and superhuman intelligence</title>
		<link>https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 29 Oct 2018 06:30:24 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[Intelligence enhancement]]></category>
		<category><![CDATA[Super intelligence]]></category>
		<category><![CDATA[Technological Singularity]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3055</guid>

					<description><![CDATA[<p>Source- thehill.com Human intelligence precedes civilization; artificial and superhuman intelligence, however, will redefine it. Current research in artificial general intelligence (AGI) and intelligence enhancement (IE) seeks to remove <a class="read-more-link" href="https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/">Human Rights without humans: The final line between artificial and superhuman intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- <a href="https://thehill.com/opinion/technology/413464-human-rights-without-humans-the-final-line-between-artificial-and" target="_blank" rel="noopener">thehill.com</a></p>
<p>Human intelligence precedes civilization; artificial and superhuman intelligence, however, will redefine it. Current research in artificial general intelligence (AGI) and intelligence enhancement (IE) seek to remove human error from their most ambitious technological quests. On the one hand, using evolutionary algorithms, AGI aims to develop a fully automated, increasingly independent, gradually cognitive, and eventually conscious artificial being. On the other hand, using neurotechnology, IE intends to create a super-intelligent and inherently different human being capable to counteract the inexorable ascension of machines in the next few years.</p>
<p>But what is the limit of such scientific enterprises? If we develop a conscious artificial being or a super-intelligent human being, what rights then prevail: human rights, artificial- or superhuman- rights? How far should we go to satisfy our intellectual curiosity, our ability to innovate, or other less noble yet often prevailing reasons such as productivity, greed, or power?</p>
<p>Has the time come to develop – in addition to individual human rights (e.g., equality, liberty, human dignity) – a new generation of collective human rights (e.g., equal technological access, human-life preservation, reciprocal income equality, brain-privacy) directed at protecting humans from humans, humans from superhumans, and humanity from extinction?</p>
<p>Nothing threatens us more than our decisions. Although our technological progress may lead us to think we live in a modern civilized world, the circular development of human society (e.g., going from supporting Nazis in the 30s to supporting Nazis in 2018) reveals a rather regressive and inhuman tendency.</p>
<p>Knowing that machines will eventually replace most humans as a factor of production &#8212; which will deprive people across the planet of vital sources of income and widen the already vast gap of income inequality &#8212; how is it we seem to prioritize economic factors such as reduction of labor, health care, insurance, and litigation costs over human rights apprehensions while downplaying existential concerns?</p>
<p>For instance, according to a recent report issued by the World Economic Forum, in the next four years, machines are expected to take over up to 42 percent of all tasks currently being performed by humans. Yet responses to this threat go from enhancing brain-capacity and providing universal basic income to reskilling the current and future workforce &#8212; this as millions across the world are still training for jobs that will soon disappear.</p>
<p>Evolutionary algorithms bring AI’s greatest risk yet: machine learning from which AGI’s full development may result. In 2014, Professor Stephen Hawking warned about this risk, stating that &#8220;the development of full artificial intelligence could spell the end of the human race.” Nevertheless, more recently, in an op-ed published in the Canadian newspaper The Globe and Mail, cognitive psychologist and Harvard Professor Steven Pinker dismissed AI risks to humanity as “apocalyptic thinking.”</p>
<p>Although Pinker’s larger argument is reasonable, it suffers from a fatal flaw: his risk assessment of AI technology hinges on human, not machine, cognitive behavior. Pinker’s analysis focuses mainly on human nature, bias, and decision-making processes, factors that may become gradually exogenous in AGI’s evolutionary algorithms and self-learned cognitive functions.</p>
<p>As MIT Professor Erik Brynjolfsson explains, machine learning can provide machines with a million-fold improvement in their performance, enabling them to solve problems on their own. That is, outside human supervision and despite human nature; therein lies the risk.</p>
<p>AGI appears also increasingly juxtaposed to human functionality. In fact, self-driving cars, trucks, trains, boats, and planes as well as customer service, bartender, waiter, firefighter, police, mower, farmer, chef, dentist, medical assistant, lawyer, and journalist robots are being introduced as cost-efficient and more reliable options.</p>
<p>Make no mistake, it is not about whether these technologies should or will develop — particularly when, in many ways, they already have. It is about a defining balance society must make between its scientific and economic ambitions and its existential reasons.</p>
<p>Notwithstanding pundits’ estimations, we cannot really predict how fast and what such intelligences will learn, especially when our prime goal is to create independent and improved intelligences. After all, is that not the very risk we are assuming?</p>
<p>The post <a href="https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/">Human Rights without humans: The final line between artificial and superhuman intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/human-rights-without-humans-the-final-line-between-artificial-and-superhuman-intelligence/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>AI Vs AGI: What&#8217;s The Difference?</title>
		<link>https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/</link>
					<comments>https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 18 Sep 2018 05:14:22 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[Difference]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[superintelligence]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2884</guid>

					<description><![CDATA[<p>Source- forbes.com In today&#8217;s society, it can be hard to operate without relying on technology one way or another. Electronics have become an essential part of our daily <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/">AI Vs AGI: What&#8217;s The Difference?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- forbes.com</p>
<p class="speakable-paragraph">In today&#8217;s society, it can be hard to operate without relying on technology one way or another. Electronics have become an essential part of our daily operations. It seems we all use technology for productivity and communication.</p>
<p>Can you imagine what would happen if we all stopped relying on technology all of a sudden? The world would be chaos at first, which further proves how much society depends on technological innovation.</p>
<p>One of these innovations revolves around artificial intelligence (AI). Though it used to only be in science fiction novels, AI is now a true venture for many businesses of today, including my own. In addition, much research is also being done regarding artificial general intelligence (AGI, or general AI), which is a more specific branch.</p>
<p>What, though, are the exact differences between the two subjects? This article will explore the separation between AI and the more ambitious AGI.</p>
<p><strong>A Lot Of Research And Development Still Needs To Be Done</strong></p>
<p>Before we dive too deep into AI, it&#8217;s important to note that this is still a new field of research. Scientists and AI experts everywhere are still developing the best programs and innovations they can think of. It might be a long time before we reach the &#8220;end&#8221; of AI development.</p>
<p>The good news is that many businesses are taking advantage of the developments already made. As a matter of fact, 72% of business leaders consider AI development an essential part of their business&#8217;s future success.</p>
<p>Since the subject is still new, some definitions are still fluid to an extent. When we talk about AI, for example, many experts would include AGI in the category of AI. Others, though, would claim there is a distinct difference.</p>
<p>It might be easy to think of AI as a broad field, with AGI as a more specific focus within it; even so, general AI applies some of the same concepts. Below are the two distinctly separate definitions that the industry has come to generally accept.</p>
<p><strong>AI Is Based On Human Cognition</strong></p>
<p>Many would argue that AI itself is centered around performing cognitive tasks that every human can perform. These tasks include things like predictive marketing or complex calculations. Sure, a human could perform them, but allowing machine learning to sift through data on our behalf saves us valuable thinking power.</p>
<p>In fact, many businesses are starting to incorporate AI innovations. What&#8217;s one of the top reasons they&#8217;re now considering the technology? Well, most of them agree that possibilities in marketing could be perfect for AI technology.</p>
<p>AI, in essence, is designed to make life easier for humans in their daily lives. This design is programmed to be useful from the outset.</p>
<p>In other words, AI functions are preprogrammed beforehand. The &#8220;decisions&#8221; machine learning makes are logical ones based on empirical data. The goal of general AI, though, is to take these decisions a step further.</p>
<p><strong>General AI Is Based On Human Intellectual Ability</strong></p>
<p>General AI might be considered to fall under the umbrella of AI as a whole. It&#8217;s sometimes referred to as strong AI or strict AI. That&#8217;s because general AI expects the machine to be equally as smart as a human.</p>
<p>General AI would expect a machine to perform functions that are now only seen in science fiction robots. We don&#8217;t have a machine available, for example, that could walk into a home and do laundry for the entire household.</p>
<p>The number of decisions and the intellectual energy required are still beyond reach. Sure, a machine might be able to locate laundry baskets and sort the clothes by color. What about random clothing items that were thrown around a teenage boy&#8217;s untidy room, though? Or, how would the machine know which items are only for dry-cleaning? Some decisions that humans take for granted would overwhelm a simple machine&#8217;s mind.</p>
<p>Another case would be a decision in which &#8220;human instinct&#8221; comes into play. For example, sometimes we go with our &#8220;gut&#8221; to determine which food product to purchase at the store. A machine might not care about a brand name as much as the lowest priced item.</p>
<p>In other words, if it can&#8217;t be directly programmed into a machine, odds are that the machine won&#8217;t be able to make heavy intellectual decisions. This ability is still reserved for the part within all of us that is &#8220;human.&#8221;</p>
<p><strong>Don&#8217;t Forget About Superintelligence</strong></p>
<p>There is yet another category under AI as a whole that might be of interest. This would be &#8220;superintelligence,&#8221; which is also only a part of science fiction still.</p>
<p>Such superintelligence is more of a general fear of those who don&#8217;t fully understand the limits of real AI technology. These people are concerned that AI could someday surpass all human intelligence. While it makes for a great adventure movie, superintelligence is not at present a realistic concern for experts.</p>
<p><strong>How Can AI Or General AI Benefit Businesses Today?</strong></p>
<p>As mentioned above, many business leaders are starting to appreciate the possible applications of AI. Since the field is still fresh, no one knows just to what extent those applications could assist us.</p>
<p>Humanity has always been optimizing and automating business operations to reduce costs and improve corporations&#8217; bottom lines. While this displacement of the workforce might be frightening, it also opens up endless productive possibilities for everyone.</p>
<p>Technology and innovation deserve to be given a fighting chance to truly benefit humanity. A solid understanding of AI is beneficial for all professionals these days. Some professionals dedicated to AI and its progress continue to push for the spread of this exciting technology.</p>
<p><strong>Stay Informed About Technology And AI Innovations</strong></p>
<p>Such a broad field of research deserves to be thoroughly explored for the benefit of humanity. All kinds of perspectives and expertise could expand the possibilities of general AI innovation. It&#8217;s important to stay informed and updated on the progress so you don&#8217;t get left behind in the modern business world.</p>
<p>Continue researching and learning about AI and technology. The potential applications of the field might end up benefiting your ventures someday.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/">AI Vs AGI: What&#8217;s The Difference?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-vs-agi-whats-the-difference/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>The AI of science fiction just got one step closer</title>
		<link>https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/</link>
					<comments>https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 03 Nov 2017 05:37:03 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[CAPTCHA]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1627</guid>

					<description><![CDATA[<p>Source &#8211; washingtonpost.com Major websites all over the world use a system called CAPTCHA to verify that someone is indeed a human and not a bot when entering <a class="read-more-link" href="https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/">The AI of science fiction just got one step closer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>washingtonpost.com</strong></p>
<p>Major websites all over the world use a system called CAPTCHA to verify that someone is indeed a human and not a bot when entering data or signing into an account. CAPTCHA stands for the “Completely Automated Public Turing test to tell Computers and Humans Apart.” The squiggly letters and numbers, often posted against photographs or textured backgrounds, have been a good way to foil hackers. They are annoying but effective.</p>
<p>The days of CAPTCHA as a viable line of defense may, however, be numbered.</p>
<p>Researchers at Vicarious, a Californian artificial intelligence firm funded by Amazon founder (and Washington Post owner) Jeffrey P. Bezos and Facebook’s Mark Zuckerberg, have just published a paper documenting how they were able to defeat CAPTCHA using new artificial-intelligence techniques. Whereas today’s most advanced AI (artificial intelligence) technologies use neural networks that require massive amounts of data to learn from, sometimes millions of examples, the researchers said their system needed just five training steps to crack Google’s reCAPTCHA technology. With this, they achieved a 67 percent success rate per character — reasonably close to the human accuracy rate of 87 percent. In answering PayPal and Yahoo CAPTCHAs, the system achieved an accuracy rate of greater than 50 percent.</p>
<p>The CAPTCHA breakthrough came hard on the heels of another major milestone from Google’s DeepMind team, the people who built the world’s best Go-playing system. DeepMind built a new artificial-intelligence system called AlphaGo Zero that taught itself to play the game at a world-beating level with minimal training data, mainly using trial and error — in a fashion similar to how humans learn.</p>
<p>Both playing Go and deciphering CAPTCHAs are clear examples of what we call narrow AI, which is different from Artificial General Intelligence (AGI) &#8212; the stuff of science fiction. Remember R2-D2 of “Star Wars,” Ava from “Ex Machina” and Samantha from “Her?” They could do many things and learned everything they needed on their own.</p>
<p>The narrow AI technologies are systems that can only perform one specific type of task. For example, if you asked AlphaGo Zero to learn to play Monopoly, it could not, even though that is a far less sophisticated game than Go; if you asked the CAPTCHA cracker to learn to understand a spoken phrase, it would not even know where to start.</p>
<p>To date, though, even narrow AI has been difficult to build and perfect. To perform very elementary tasks such as determining whether an image is of a cat or a dog, the system requires the development of a model that details exactly what is being analyzed and massive amounts of data with labeled examples of both. The examples are used to train the AI systems, which are modeled on the neural networks in the brain, in which the connections between layers of neurons are adjusted based on what is observed. To put it simply, you tell an AI system exactly what to learn, and the more data you give it, the more accurate it becomes.</p>
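<p>The labeled-example training described above can be illustrated with a toy, hypothetical sketch (not from the article, and vastly simpler than a real neural network): a nearest-centroid classifier that learns the "cat vs. dog" distinction only from the labels it is given, and can do nothing else.</p>

```python
# Toy illustration of supervised learning: labeled examples define the
# task, and the model only generalizes within that one narrow task.
from statistics import mean

def train(examples):
    """examples: list of ((x, y), label). Returns label -> centroid."""
    by_label = {}
    for point, label in examples:
        by_label.setdefault(label, []).append(point)
    return {lbl: (mean(p[0] for p in pts), mean(p[1] for p in pts))
            for lbl, pts in by_label.items()}

def predict(centroids, point):
    """Assign the label whose centroid is closest to the point."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Hand-labeled training data (hypothetical 2-D features).
labeled = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
           ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]
model = train(labeled)
print(predict(model, (1.1, 0.9)))  # cat
print(predict(model, (5.1, 4.9)))  # dog
```

<p>As with the neural networks the article describes, the model is only as good as its labeled data: more examples sharpen the centroids, but the classifier can never answer a question it was not trained to ask.</p>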
<p>The methods that Vicarious and Google used were different; they allowed the systems to learn on their own, albeit in a narrow field. By making their own assumptions about what the training model should be and trying different permutations until they got the right results, they were able to teach themselves how to read the letters in a CAPTCHA or to play a game.</p>
<p>This blurs the line between narrow AI and AGI and has broader implications, in robotics and in virtually any other field in which machine learning in complex environments may be relevant.</p>
<p>Beyond visual recognition, the Vicarious breakthrough and AlphaGo Zero success are encouraging scientists to think about how AIs can learn to do things from scratch. And this brings us one step closer to coexisting with classes of AIs and robots that can learn to perform new tasks that are slight variants on their previous tasks — and ultimately the AGI of science fiction.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/">The AI of science fiction just got one step closer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-ai-of-science-fiction-just-got-one-step-closer/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>A physicist explores the future of artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 03 Aug 2017 07:57:11 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artificial general intelligence]]></category>
		<category><![CDATA[computer scientist]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[physicist explores]]></category>
		<category><![CDATA[wondrous technological]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=457</guid>

					<description><![CDATA[<p>Source &#8211; sciencemag.org Whether it’s reports of a new and wondrous technological accomplishment or of the danger we face in a future filled with unbridled machines, artificial intelligence <a class="read-more-link" href="https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/">A physicist explores the future of artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; sciencemag.org</p>
<p>Whether it’s reports of a new and wondrous technological accomplishment or of the danger we face in a future filled with unbridled machines, artificial intelligence (AI) has recently been receiving a great deal of attention. If you want to understand what the fuss is all about, Max Tegmark’s original, accessible, and provocative <em>Life 3.0: Being Human in the Age of Artificial Intelligence</em> would be a great place to start.</p>
<p>The book’s goal is not to tell us what being human will look like in the years ahead, as the title might seem to suggest, but rather to give us the background necessary to understand where technology might lead the human species. In this it succeeds, bringing well-timed clarity to the sometimes muddled public view of AI that has emerged over the past few years.</p>
<p>When computer scientist John McCarthy gave the field its name in 1955, AI&#8217;s scholars grappled with the tantalizing prospect that computers might have the capacity to demonstrate broad human-level intelligence, something that is now increasingly called &#8220;artificial general intelligence&#8221; (AGI). Achieving AGI, however, proved difficult, and researchers were forced to strategically target narrower tasks, focusing on problems such as understanding images, interacting with natural language, manipulating objects in the physical world, learning, and even playing games. The timeliness of <em>Life 3.0</em> arises from the unprecedented number and range of successes seen in these areas in just the past few years and the ensuing publicity these successes have generated.</p>
<p>Recent depictions of the future of AI run the gamut from benevolent machines letting people live lives of leisure to nightmarish slaughterers of the human race. Part of this dichotomy may come from the fact that not everyone means the same thing when they refer to AI. Some are focused on today’s progress and the potential implications of AI automation (see, for example, Erik Brynjolfsson and Andrew McAfee’s <em>The Second Machine Age</em>). Others, however, are talking about AGI (as in Nick Bostrom’s <em>Superintelligence</em>).</p>
<p>Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements <em>The Second Machine Age</em>’s economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in <em>Superintelligence</em>.</p>
<p>Tegmark begins by laying out the range of perspectives currently found among those working in the field. He showcases, especially, the increasingly mainstream view that we should be thinking more deeply about the societal implications of what we create and how we might ultimately design and build AI systems that reflect and respect our hopes and values.</p>
<p><em>Life 3.0</em> focuses both on the short-term status of AI and on AGI and the longer-term outlook, projecting from tens or hundreds of years to tens of thousands of years to billions of years ahead. Along the way, Tegmark gives us a physicist’s take on intelligence and computation, the origin and nature of goal-oriented behavior and its implications for AGI, the nature of consciousness and what it might mean for AGI, and—veering from the book’s main focus on AI—what limits physics might impose on our future.</p>
<p><em>Life 3.0</em> is interlaced with these and many other thought-provoking ideas. For example, are feelings a consequence of the universe maximizing its entropy? Just as computers work on a &#8220;substrate&#8221; of 0s and 1s independent of their precise physical implementation in the machine, is intelligence similarly &#8220;substrate-independent,&#8221; with the potential for implementation not just in biological neurons but also in computer hardware? Would intelligent machines that design and build new iterations of themselves represent a new form of &#8220;life&#8221; (the &#8220;Life 3.0&#8221; Tegmark refers to in the book&#8217;s title)? The book is also populated with numerous science fiction&#8211;worthy futures&#8212;both good and bad&#8212;built on a range of optimistic and often provocative projections of where technology may go and may lead us.</p>
<p>At one point, Tegmark quotes Emerson: “Life is a journey, not a destination.” The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/">A physicist explores the future of artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-physicist-explores-the-future-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
