<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>human development Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/human-development/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/human-development/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 23 Oct 2017 06:27:28 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Human Learning: Beyond the Panopticon</title>
		<link>https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/</link>
					<comments>https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Oct 2017 06:27:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[human development]]></category>
		<category><![CDATA[human learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1532</guid>

					<description><![CDATA[<p>Source &#8211; huffingtonpost.com In the quest to personalize your experience with the latest technologies, and as a reward for your dedicated participation in signing up for new services <a class="read-more-link" href="https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/">Human Learning: Beyond the Panopticon</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>huffingtonpost.com</strong></p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>In the quest to personalize your experience with the latest technologies, and as a reward for your dedicated participation in signing up for new services without reading the fine print in their consent agreements, the powers that be have a special gift for you. Yes, <strong>it’s your own personal panopticon!</strong> Previously only reserved for fiction writers, today you can have dystopia delivered — with free shipping!</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>They say it helps to laugh, but this isn’t really that funny anymore, with a sense of uneasiness beginning to build as the realities of Generation Tech start to set in for the long haul. The devices we perpetually use and that connect almost everything around us aren’t going anywhere, but every byte of data they collect is likely going <em>somewhere</em>. Is it in the cloud? A massive data storage facility? Individual dossiers?</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Wherever our data goes, we pretty much know it’s being compiled. Companies use this information to build detailed profiles for marketing purposes, governments can potentially use it for enforcement, and others from insurers to employers can access much of it through garden-variety online searches. Our personal lives are increasingly being laid bare, often blithely <strong>dismissed as the cost of doing business</strong>.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Indeed, we’ve all probably had this experience by now: you search for something that you’ve never searched for before — maybe a household appliance or a healthcare provider in your area. Suddenly and seamlessly, ads for those services and related products begin popping up in banners and sidebars; you might even get solicitations on your other devices. And that’s only from conducting one simple search.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Extrapolating further, and drawing from routine revelations about hacking and backdoors, it appears that the depths of data mining are expanding all the time. With each new voice-activated ‘assistant’ or IoT-connected gadget, access to our private domains is being pried open more and more. No warrants need to be issued for doing this, nor is there any oversight committee; <strong>this all happens with consent</strong>.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>And it’s just getting started in earnest. Soon enough, if not already at hand, there will be a record of every conversation you have, every keystroke you enter, every transaction you make, every person you interact with, every place you go, and everything you watch, listen to, like, and purchase. This will all be promoted as bringing greater convenience, promising security and mobility, and encouraging ‘sharing’.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>Such observations are almost passé by now, seen as a downer at best or alarmist at worst. But the full implications are worth considering, even as the sense of resignation to the inevitable becomes almost palpable. As <em>New York Times</em> tech columnist Farhad Manjoo recently lamented, “Technology has crossed over to the dark side. It’s coming for you; it’s coming for us all, and we may not survive its advance.”</p>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>If our lives are an open book, what becomes of privacy? And perhaps more to the point: <strong>without privacy, what becomes of human development?</strong> Some may say they’re doing nothing wrong and have nothing to hide, but our rights weren’t designed to protect only the pure. A healthy society requires functional individuals; this includes spaces of autonomy, exploration, reflection, expression, and more.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>More pointedly, how many of us can really say that our lives could withstand such an unprecedented level of total exposure? We spend a lot of time cultivating complex personas, engaging in “impression management,” building faces to the world that reflect our personal images and aspirations. We have ethical ideals, spiritual frameworks, and emotional cores. And we also have things we keep to ourselves.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>This is natural, and it’s why privacy exists. Having unknown entities (or just anyone with a computer) <strong>peer through digital windows into our very being</strong> is a perverse form of high-tech voyeurism. The fact that access is often freely given doesn’t negate the responsibility of those collecting, storing, mining, and deploying the data being gleaned. Before considering alternatives, some implications are worth noting:</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p><em>Manipulation</em>: We already know what this looks like, since it’s often done openly. Our digital footprints are regularly used for marketing purposes, to tailor ads to our desires and information to our tastes. We’ve also seen a darker side, as with the propagation of “fake news” (the <em>real</em> fake news, not the <em>fake</em> fake news) and the deployment of targeted persuasion for political purposes. And there’s more to come.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p><em>Coercion</em>: Maybe you’ve seen the <em>Black Mirror</em> episode where people with secrets and repugnant habits are blackmailed to engage in horrific behaviors? Imagine this playing out in more ordinary terms, less to make people do awful things than to lead them into deeper modes of obedience. In fact, the panopticon itself was conceived as a space of coerced conduct through constant surveillance, as the ultimate prison.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p><em>Control</em>: And thus we reach the dystopian horizon of the panopticon, commensurate with the Orwellian tendencies already in evidence. Couched in the rhetoric of convenience and access, a web of technology that tracks our every impulse is fraught with implications for social control. Aptly, the lyric that “every step you take, I’ll be watching you” was intoned by <em>The Police</em> — released in 1983, but very much 1984.</p>
</div>
<div class="content-list-component bn-content-list-text text" data-beacon="{&quot;p&quot;:{&quot;mnid&quot;:&quot;citation&quot;}}" data-beacon-parsed="true">
<p>There aren’t easy answers to these concerns. Perhaps if the societal ethos moved toward “watching the watchers” rather than simply yielding to total surveillance, things might improve. More oversight as to what’s collected and who has access is crucial, as are clearly marked rights and remedies. We might even demand technology that <em>expands</em> our privacy, rather than leveraging it for someone else’s gain.</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/">Human Learning: Beyond the Panopticon</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/human-learning-beyond-the-panopticon/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Will Artificial Intelligence Be the Last Human Invention?</title>
		<link>https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/</link>
					<comments>https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 07 Sep 2017 07:19:20 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[human development]]></category>
		<category><![CDATA[Human Invention]]></category>
		<category><![CDATA[robotics experts]]></category>
		<category><![CDATA[supermachines]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=996</guid>

					<description><![CDATA[<p>Source &#8211; dailyutahchronicle.com In Plato’s “Phaedrus,” Socrates tells the legend of King Thamus who is given the gift of writing from Theuth, the Egyptian god of knowledge. Writing, <a class="read-more-link" href="https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/">Will Artificial Intelligence Be the Last Human Invention?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>dailyutahchronicle.com</strong></p>
<p>In Plato’s “Phaedrus,” Socrates tells the legend of King Thamus who is given the gift of writing from Theuth, the Egyptian god of knowledge. Writing, Theuth argues, will act as a mass remedy for memory loss, allowing information to be more easily remembered and stored. No longer will people have to rely on oral tradition to learn and pass on information. King Thamus is unconvinced and argues writing will in fact have the opposite effect, that writing will lead to laziness, not enlightenment. Instead of internalizing information, younger generations would rely on notes, books, reminders and otherwise externalized forms of knowledge to create the illusion of knowledge. To King Thamus, the technology of writing is a threat to the prosperity of human civilization.</p>
<p>Centuries have passed, and it seems King Thamus was wrong: It will not be writing that degrades or destroys humanity after all. The Egyptian leader could not have foreseen hydrogen bombs capable of escalating nuclear warfare to the point of eradicating every lifeform on earth or industrial levels of carbon emissions predicted to drastically alter the planet’s climate. He didn’t know about biological agents that, in the wrong hands, could wipe out entire countries or populations at a time. While Thamus was worrying about writing, the rest of humanity was busy cooking up newer, deadlier technologies capable of existential destruction far beyond the ancient ruler’s comprehension.</p>
<p><b>The real threats</b></p>
<p>If not writing, then what? In most contemporary academic and political circles, the two biggest threats discussed are climate change and nuclear warfare. With the current White House administration demonstrating unprecedented hostility toward global climate agreements, matched only by its apparent willingness to use nukes if necessary, both of these threats are timely and worthy of concern. There is another threat, however, that has received disproportionately little attention and contemplation. Artificial intelligence — AI, superintelligence, autonomous robots — could spiral out of control and drastically alter the world as we know it, but that sort of thinking was typically left to the likes of H.G. Wells and Michael Crichton. Many researchers, scientists and philosophers, however, are convinced the threat of AI is more than science fiction.</p>
<p>On Aug. 20, a group of AI innovators and robotics experts, including Tesla’s Elon Musk and Mustafa Suleyman of Google DeepMind, penned an open letter to the United Nations calling for a complete ban on autonomous weapons.</p>
<p>“As companies building the technologies in Artificial Intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the letter reads. “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”</p>
<p>But autonomous killer robots are just the tip of the technological iceberg that is artificial intelligence. Automated weapons, while terrifying to think about in the hands of unstable regimes, still rely on human-coded algorithms. Like Deep Blue, an IBM-developed computer that beat the world chess champion in 1997, these weapons, which are being developed today in the form of pilotless planes and military combat drones, are subordinate to human programming and development.</p>
<p><b>Intelligence explosion</b></p>
<p>The real worry of AI critics is a runaway superintelligence that spirals beyond human comprehension or control. In academic circles, it is known as an intelligence explosion or “the singularity,” a term coined in 1983 by sci-fi writer Vernor Vinge. Based on the idea that computer processing speed doubles at regular intervals of time, it is easy to imagine machine capability compounding exponentially until programming outpaces human understanding. The argument goes like this: One day, be it next year, next decade or 50 years from now, researchers will develop a machine smarter than any human. As a machine of superior intelligence, it will be better at programming itself than its creators are. As it improves itself, it will get smarter and eventually create a machine smarter than itself. Then that machine improves itself and creates something even smarter. Then the next machine, then the next. Then, intelligence explosion.</p>
<p>While the singularity argument is logically sound, it is little more than a thought experiment that relies on crucial philosophical assumptions, such as the assumption that sentient, humanlike intelligence can be programmed into a machine at all. Sure, there are calculators that can compute numbers and functions faster than the human brain, but can a computer program decide whether it is ethical to engage in war? Or to topple an oppressive dictator? Not everyone is convinced.</p>
<p>“It’s not obvious that a computer can capture what a human brain does,” said Thomas C. Henderson, a professor in the University of Utah school of computing. There are hypotheses that such a code could be developed down the line, but “nobody knows if they’re really true.”</p>
<p>Henderson’s research is aimed at understanding robot cognition and developing computer programs that understand and do things in the real world: walking, driving, making motions. In his lab sit a number of automated machines, including unmanned aerial vehicles and small wheeled rovers. He is hopeful about developments in AI technology, but believes popular perceptions of the technology are based largely on film and media.</p>
<p>“There is always this notion that we can create robots that can even walk, that can somehow resemble human-type capabilities,” Henderson said. “It’s much tougher than it seems, building mechanisms that work well and are robust. Nobody’s really figured that out yet.”</p>
<p>Whether AI technology is good or bad depends on who uses it, Henderson says. He believes there is “a lot of straightforward benefit to most people.” There is, however, “good potential for abuse,” Henderson admits.</p>
<p>“I think a lot of good can come from it, because AI techniques could be used to help figure out how to keep the power grid from going down, so that’s good,” Henderson said. “But it can also be used to take down the power grid.”</p>
<p><b>Regulations</b></p>
<p>Henderson hasn’t given much thought to the notion of regulating AI, but he believes laws will start being implemented on a case-by-case basis. He uses speed limits as an analogy: no one really thought about regulating highway speeds until cars were fast enough to justify doing so.</p>
<p>“You only impose speed limits once you build cars that can go faster than is safe,” he said. “I think regulations will follow the implementations of technology.”</p>
<p>As improvements are made to self-driving cars, like Google’s Waymo or Tesla’s Autopilot, Henderson said governmental agencies are going to have to start finding ways to regulate the technology.</p>
<p>In speaking with Henderson, it is obvious he does not view AI in the immediately threatening way that people like Musk or Nick Bostrom do. Bostrom is an Oxford philosopher and researcher of superintelligence who has been at the forefront of AI criticism.</p>
<p>“There seems to always be an issue of how theory and technology are applied,” Henderson said. “[For example], there are good and bad applications of nuclear technology. A lot of things [that] come out of that are quite useful, and then some things are kind of dangerous.”</p>
<p>In the end, it all depends on “how people are exploiting it,” according to Henderson.</p>
<p>With its sensational discussions of sentry-yielding war machines and maleficent supermachines, it is easy to leave criticisms of AI to comic books, graphic novels and purely conceptual philosophical discussions. Looking at the history of human development, however, defined by agricultural, industrial and technological revolutions, it seems implausible that developments in artificial intelligence won’t drastically alter the world.</p>
<p>Superintelligence has the potential to save the world by supplying enough physical and intellectual labor to allow humans to live freely. It also has the potential to spiral into something drastically beyond our comprehension or control, and unless its interests perfectly coincide with our own, we should be worried. The greatest takeaway from the threat of runaway technology comes from the genre of science fiction.</p>
<p>In Crichton’s “Jurassic Park,” Dr. Ian Malcolm chastises the park’s creators for arrogantly thinking they can exert control over their scientific creation. The fact is, we don’t know what artificial intelligence will look like or be capable of, but this fact alone is reason enough to worry.</p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/">Will Artificial Intelligence Be the Last Human Invention?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/will-artificial-intelligence-be-the-last-human-invention/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
	</channel>
</rss>
