<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>A.I. researchers Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/a-i-researchers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/a-i-researchers/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 27 Aug 2018 07:29:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help?</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 27 Aug 2018 07:29:58 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[A.I. researchers]]></category>
		<category><![CDATA[robotic weapons]]></category>
		<category><![CDATA[science and technology]]></category>
		<category><![CDATA[Silicon Valley]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2790</guid>

					<description><![CDATA[<p>Source &#8211; MOUNTAIN VIEW, Calif. — In a May memo to President Trump, Defense Secretary Jim Mattis implored him to create a national strategy for artificial intelligence. <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/">Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;</p>
<p class="css-1i0edl6 e2kc3sl0">MOUNTAIN VIEW, Calif. — In a May memo to President Trump, Defense Secretary Jim Mattis implored him to create a national strategy for artificial intelligence.</p>
<p class="css-1i0edl6 e2kc3sl0">Mr. Mattis argued that the United States was not keeping pace with the ambitious plans of China and other countries. With a final flourish, he quoted a recent magazine article by Henry A. Kissinger, the former secretary of state, and called for a presidential commission capable of “inspiring a whole of country effort that will ensure the U.S. is a leader not just in matters of defense but in the broader ‘transformation of the human condition.’” Mr. Mattis included a copy of Mr. Kissinger’s article with his four-paragraph note.</p>
<p class="css-1i0edl6 e2kc3sl0">Mr. Mattis’s memo, which has not been reported before and was viewed by The New York Times, reflected a growing sense of urgency among defense officials about artificial intelligence. The consultants and planners who try to forecast threats think A.I. could be the next technological game changer in warfare.</p>
<p class="css-1i0edl6 e2kc3sl0">The Chinese government has raised the stakes with its own national strategy. Academic and commercial organizations in China have been open about working closely with the military on A.I. projects. They call it “military-civil fusion.”</p>
<p class="css-1i0edl6 e2kc3sl0">It is not clear what impact, if any, Mr. Mattis’s memo had. Though the White House announced in May — about three weeks before he sent his note — that it would establish a panel of government officials to study A.I. issues, critics say the administration still has not done enough to set federal policy. Officials with the Office of Science and Technology Policy, which would most likely take a leadership role in setting an agenda for A.I., said that A.I. is a national research and development priority and that it is part of the president’s national security and defense strategies.</p>
<p class="css-1i0edl6 e2kc3sl0">Nonetheless, the Pentagon appears to be pushing ahead on its own, looking for ways to strengthen its ties with A.I. researchers, particularly in Silicon Valley, where there is considerable wariness about working with the military and intelligence agencies.</p>
<p class="css-1i0edl6 e2kc3sl0">In late June, the Pentagon announced the creation of the Joint Artificial Intelligence Center, or JAIC. Defense officials have not said how many people will be dedicated to the new program or where it will be based when it starts next month. It could have several offices around the country.</p>
<p class="css-1i0edl6 e2kc3sl0">The Defense Department wants to shift $75 million of its annual budget into the new office and a total of $1.7 billion over five years, according to a person familiar with the matter who was not allowed to speak about it publicly.</p>
<p class="css-1i0edl6 e2kc3sl0">Known as “the Jake,” the center is billed as a way of facilitating dozens of A.I. projects across the Defense Department. These include Project Maven, an effort to build technology that identifies people and objects in video captured by drones; the project has come to symbolize the ideological gap between the government and Silicon Valley.</p>
<p class="css-1i0edl6 e2kc3sl0">Around the time Mr. Mattis wrote his memo to Mr. Trump, thousands of Google employees were protesting their company’s involvement in Project Maven. After the protests became public, Google withdrew from the project.</p>
<p class="css-1i0edl6 e2kc3sl0">The protests might have been a surprise to Pentagon officials, since big tech companies have been defense contractors for as long as there has been a Silicon Valley. And there is some irony in any industry reluctance to work with the military on A.I., given that research competitions sponsored by an arm of the Defense Department, called Darpa, jump-started work on the technology that goes into the autonomous vehicles many tech companies are now trying to commercialize.</p>
<p class="css-1i0edl6 e2kc3sl0">But in the eyes of some researchers, creating robotic vehicles and developing robotic weapons are very different. And they fear that autonomous weapons pose an unusual threat to humans.</p>
<p class="css-1i0edl6 e2kc3sl0">“This is a unique moment, with so much activism coming out of Silicon Valley,” said Elsa Kania, an adjunct fellow at the Center for a New American Security, a think tank that explores policy related to national security and defense. “Some of it is informed by the political situation, but it also reflects deep concern over the militarization of these technologies as well as their application to surveillance.”</p>
<p class="css-1i0edl6 e2kc3sl0">The Joint Artificial Intelligence Center, officials hope, will help close that gap.</p>
<p class="css-1i0edl6 e2kc3sl0">“One of our greatest national strengths is the innovation and talent found in our private sector and academic institutions, enabled by free and open society,” Brendan McCord, a former Navy submarine officer and an A.I. start-up veteran who will lead the center, said during a public meeting in Silicon Valley last month. “The JAIC will help evolve our partnerships with industry, academia, allies.”</p>
<p class="css-1i0edl6 e2kc3sl0">The center, he added, will work with “traditional and nontraditional innovators alike,” meaning longtime government contractors like Lockheed Martin as well as newer Silicon Valley companies. The Pentagon has worked with more than 20 companies on Project Maven so far, but it hopes to expand this work and overcome the reluctance among workers.</p>
<p class="css-1i0edl6 e2kc3sl0">This summer, a Pentagon researcher worked alongside a small but influential Silicon Valley artificial intelligence lab, Fast.ai, on a public effort to build technology capable of accelerating the development of A.I. systems.</p>
<p class="css-1i0edl6 e2kc3sl0">Autonomous systems are based on algorithms that can learn to do things like recognize objects by analyzing vast amounts of data. The Fast.ai project would improve the speed of that A.I. “training.”</p>
<p class="css-1i0edl6 e2kc3sl0">The Pentagon is also offering an olive branch to its Silicon Valley critics. While unveiling the JAIC, Mr. McCord said its focus would include “ethics, humanitarian considerations, and both short-term and long-term A.I. safety.”</p>
<p class="css-1i0edl6 e2kc3sl0">It was an important step toward reaching détente with A.I. researchers, said Sophie-Charlotte Fischer, a researcher at the Center for Security Studies at ETH Zurich in Switzerland who specializes in the relationship between the tech industry and government. “There needs to be a clear understanding of what it means to develop and deploy these A.I. technologies,” she said.</p>
<p class="css-1i0edl6 e2kc3sl0">Will it be enough? Skeptics want to see the details. “So far, the plans remain very abstract,” Ms. Fischer said. “What kind of systems do they want to allow? Do they want to attach weapons systems to A.I.?”</p>
<p class="css-1i0edl6 e2kc3sl0">Robert Work, the former deputy secretary of defense who founded Project Maven, worries that the Google protest has skewed the perception of the project, which does not yet involve lethal weapons, and stunted public discussion of how military technology should evolve.</p>
<p class="css-1i0edl6 e2kc3sl0">“We need to have an open debate about A.I. and its consequences and hear arguments from all sides,” he said in a recent interview.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/">Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-is-now-a-pentagon-priority-will-silicon-valley-help/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>How to Regulate Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 04 Sep 2017 11:40:57 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[A.I. harm]]></category>
		<category><![CDATA[A.I. researchers]]></category>
		<category><![CDATA[A.I. science]]></category>
		<category><![CDATA[laws of robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=941</guid>

					<description><![CDATA[<p>Source &#8211; nytimes.com The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/">How to Regulate Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>nytimes.com</strong></p>
<p class="story-body-text story-content" data-para-count="499" data-total-count="499">The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses A.I. science with science fiction. Nevertheless, even A.I. researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy. It’s natural to ask whether we should develop A.I. at all.</p>
<p class="story-body-text story-content" data-para-count="575" data-total-count="1074">I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do so, then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.</p>
<p class="story-body-text story-content" data-para-count="511" data-total-count="1585">I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.</p>
<p class="story-body-text story-content" data-para-count="186" data-total-count="1771">These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.</p>
<div class="story-body-supplemental">
<div class="story-body story-body-1">
<p class="story-body-text story-content" data-para-count="473" data-total-count="2244">First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.</p>
</div>
</div>
<div class="story-body-supplemental">
<div class="story-body story-body-2">
<p id="story-continues-2" class="story-body-text story-content" data-para-count="206" data-total-count="2450">Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.</p>
<p id="story-continues-3" class="story-body-text story-content" data-para-count="614" data-total-count="3064">My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford.</p>
<p class="story-body-text story-content" data-para-count="515" data-total-count="3579">My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.</p>
<p class="story-body-text story-content" data-para-count="671" data-total-count="4250">My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an A.I. Barbie. Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.</p>
<p class="story-body-text story-content" data-para-count="312" data-total-count="4562" data-node-uid="1">My three A.I. rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Whether or not you agree with Mr. Musk’s view about A.I.’s rate of progress and its ultimate impact on humanity (I don’t), it is clear that A.I. is coming. Society needs to get ready.</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/">How to Regulate Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-regulate-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
