<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>reinforces Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/reinforces/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/reinforces/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 14 Jun 2019 09:43:18 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Artificial intelligence reinforces power and privilege</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 14 Jun 2019 09:43:18 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Power]]></category>
		<category><![CDATA[privilege]]></category>
		<category><![CDATA[reinforces]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3817</guid>

					<description><![CDATA[<p>Source:- aljazeera.com What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They&#8217;re all being sifted by some <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/">Artificial intelligence reinforces power and privilege</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- aljazeera.com</p>
<p>What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They&#8217;re all being sifted by some form of artificial intelligence.</p>
<p>Advanced nations and the world&#8217;s biggest companies have thrown billions of dollars behind AI &#8211; a set of computing practices, including machine learning, that collate masses of our data, analyse it, and use it to predict what we would do.</p>
<p>Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI&#8217;s promise from the hot air and decide where to focus concern?</p>
<p>Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the people who coined the term &#8220;artificial intelligence&#8221; in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:</p>
<p>&#8220;At one point, [Minsky] said to me, &#8216;Look, whatever you think about this, just play along, because it gets us funding, this&#8217;ll be great.&#8217; And it&#8217;s true, you know &#8230; in those days, the military was the principal source of funding for computer science research. And if you went into the funders and you said, &#8216;We&#8217;re going to make these machines smarter than people some day and whoever isn&#8217;t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'&#8221;</p>
<p>But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If &#8220;computer says, &#8216;no,'&#8221; as the old joke goes, to whom do you complain?</p>
<p>We&#8217;d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it&#8217;s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we&#8217;re ceding more and more power to algorithms, or rather &#8211; to the people behind them.</p>
<p>Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn&#8217;t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people &#8211; and, in particular, grading for various kinds of risk.</p>
<p>As a human rights lawyer doing &#8220;war on terror&#8221; cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney&#8217;s &#8220;one percent doctrine&#8221;? He said that any risk &#8211; even one percent &#8211; of a terror attack would, in the post-9/11 world, be treated like a certainty.</p>
<p>That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration &#8211; and the shift to a machine learning-driven process in national security, too.</p>
<p>During President Barack Obama&#8217;s drone wars, suspicion didn&#8217;t even need to be personal &#8211; in a &#8220;signature strike&#8221;, it could be a nameless profile, generated by an algorithm, analysing where you went and who you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: &#8220;We kill people based on metadata,&#8221; he said.</p>
<p>Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk &#8211; that is, zero risk for the class of people Lanier describes as &#8220;closest to the biggest computer&#8221; &#8211; is achievable and desirable. This is what is crucial for us all to understand: AI isn&#8217;t just about Google and Facebook targeting you with advertisements. It&#8217;s about risk.</p>
<p>The police in Los Angeles believed it was possible to use machine learning to predict crime. London&#8217;s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.</p>
<p>It used to be common to talk about &#8220;the digital divide&#8221;. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun poorer citizens without computers and the Internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child &#8211; and it drives newer ones, like Digital ID, the aim to give everyone on Earth a unique identity, in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.</p>
<p>But it also comes at a cost. Today, a new digital divide is opening. One between the knowers and the known. The data miners and optimisers, who optimise, of course, according to their values, and the optimised. The surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world&#8217;s citizens.</p>
<p>AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.</p>
<p>This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.</p>
<p>When I apply for a job in law or journalism, a panel of humans interviews me; not an AI using &#8220;expression analysis&#8221;, as I would experience applying for a job in a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment &#8211; and everyone else, who gets number-crunched, tagged, and sorted.</p>
<p>Unless we head off what Shoshana Zuboff calls &#8220;the substitution of computation for politics&#8221; &#8211; where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation &#8211; we risk losing control over our values.</p>
<p>The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?</p>
<p>Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big &#8211; like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?</p>
<p>Everyone has a stake in these questions. Friendly panels and hand-picked corporate &#8220;AI ethics boards&#8221; won&#8217;t cut it. Only by opening up these systems to critical, independent enquiry &#8211; and increasing the power of everyone to participate in them &#8211; will we build a just future for all.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/">Artificial intelligence reinforces power and privilege</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-reinforces-power-and-privilege/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
