<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>nuclear stability Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/nuclear-stability/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/nuclear-stability/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 01 May 2018 06:28:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Will artificial intelligence undermine nuclear stability?</title>
		<link>https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/</link>
					<comments>https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 01 May 2018 06:28:24 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI machines]]></category>
		<category><![CDATA[nuclear stability]]></category>
		<category><![CDATA[nuclear war]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2305</guid>

					<description><![CDATA[<p>Source &#8211; thebulletin.org Artificial intelligence and nuclear war have been fiction clichés for decades. Today’s AI is impressive to be sure, but specialized, and remains a far cry from <a class="read-more-link" href="https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/">Will artificial intelligence undermine nuclear stability?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; thebulletin.org</p>
<p>Artificial intelligence and nuclear war have been fiction clichés for decades. Today’s AI is impressive to be sure, but specialized, and remains a far cry from computers that become self-aware and turn against their creators. At the same time, popular culture does not do justice to the threats that modern AI indeed presents, such as its potential to make nuclear war more likely even if it never exerts direct control over nuclear weapons.</p>
<p>Russian President Vladimir Putin recognized the military significance of AI when he declared in September that the country that leads in artificial intelligence will eventually rule the world. He may be the only leader to have put it so bluntly, but other world powers appear to be thinking similarly. Both China and the United States have announced ambitious efforts to harness AI for military applications, stoking fears of an incipient arms race.</p>
<p>In the same September speech, Putin said that AI comes with “colossal opportunities” as well as “threats that are difficult to predict.” The gravest of those threats may involve nuclear stability—as we describe in a new RAND publication that outlines a few of the ways in which stability could be strained.</p>
<p>Strategic stability exists when governments aren’t tempted to use nuclear threats or coercion against their adversaries. It involves more than just maintaining a credible ability to retaliate after an enemy attack. In addition to that deterrent, nuclear stability requires assurance and reassurance. When a nation extends a nuclear security guarantee to allies, the allies must be assured that nuclear weapons will be used in their defense even if the nation extending the guarantee must put its own cities at risk. Adversaries need to be reassured that forces built up for deterrence and to protect allies will not be used without provocation. Deterrence, assurance, and reassurance are often at odds with each other, making nuclear stability difficult to maintain even when governments have no interest in attacking each other.</p>
<p>In a world where increasing numbers of rival states are nuclear-armed, the situation becomes almost unmanageable. In the 1970s, four of the five declared nuclear powers primarily targeted their weapons on the fifth, the Soviet Union (Beijing, after its 1969 border clashes with the Soviet Union, feared Moscow much more than Washington). It was a relatively simple stand-off between the Soviet Union and its many adversaries. Today, nine nuclear powers are entangled in overlapping strategic rivalries—including Israel, which has not declared the nuclear arsenal that it is widely believed to possess. While the United States, the United Kingdom, and France still worry about Russia, they also fret about an increasingly potent China. Beijing’s rivals include not just the United States and Russia but India as well. India fears China too, but primarily frets about Pakistan. And everyone is worried about North Korea.</p>
<p>In such a complex and dynamic environment, teams of strategists are required to navigate conflict situations—to identify options and understand their ramifications. Could AI make this job easier? With AI now beating human professionals in the ancient Chinese strategy game Go, as well as in games of bluffing such as poker, countries may be tempted to build machines that could “sit” at the table during nuclear crises and act as strategists.</p>
<p>Artificially intelligent machines may prove to be less error-prone than humans in many contexts. But for tasks such as navigating conflict situations, that moment is still far off in the future. Much effort must be expended before machines can—or should—be relied on for consistent performance of the extraordinary task of helping the world avoid nuclear war. Recent research suggests that it is surprisingly simple to trick an AI system into reaching incorrect conclusions when an adversary gets to control some of the inputs, such as how a vehicle is painted before it is photographed.</p>
<p>But AI could undermine the foundations of nuclear stability through means other than providing advice to strategists. Sensors and cameras are proliferating throughout the world; AI’s growing ability to make predictions from these disparate sources may cause nations to worry that the missiles and submarines they depend upon for assured retaliation will become vulnerable. During the Cold War, the superpowers sought crippling “first-strike” capabilities, a perilous strategy: each superpower became convinced that the other might launch a disarming strike against it. With retaliation prevented, whoever struck first would gain a huge advantage, and the fear of being disarmed pushed both sides toward hair-trigger postures that greatly increased the chances of accidental nuclear war. Such challenges are even more fraught in today’s world, where more states are nuclear-armed and AI technology might lend extra credibility to threats against nuclear retaliatory forces.</p>
<p>In the coming years, AI-enabled progress in tracking and targeting adversaries’ nuclear weapons could undermine the foundations of nuclear stability; that is, nations may question whether their missiles and submarines are vulnerable to a first strike. Will AI someday be able to guide strategy decisions about escalation or even launching nuclear weapons? Such capabilities are off in the distance for now, but the chance that they will eventually emerge is real—as is the need to understand, right now, how AI could reshape the world’s approach to nuclear stability.</p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/">Will artificial intelligence undermine nuclear stability?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/will-artificial-intelligence-undermine-nuclear-stability/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>By 2040, artificial intelligence could upend nuclear stability</title>
		<link>https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/</link>
					<comments>https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 26 Apr 2018 05:41:07 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Humans intelligence]]></category>
		<category><![CDATA[nuclear security]]></category>
		<category><![CDATA[nuclear stability]]></category>
		<category><![CDATA[nuclear war]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2274</guid>

					<description><![CDATA[<p>Source &#8211; sciencedaily.com While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take <a class="read-more-link" href="https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/">By 2040, artificial intelligence could upend nuclear stability</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; sciencedaily.com</p>
<p>While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take potentially apocalyptic risks, according to the paper.</p>
<p>During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.</p>
<p>The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces such as submarine-launched and mobile missiles could be targeted and destroyed.</p>
<p>Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.</p>
<p>&#8220;The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history,&#8221; said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. &#8220;Much of the early development of AI was done in support of military efforts or with military objectives in mind.&#8221;</p>
<p>He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s that sought to use AI to translate reconnaissance data into nuclear targeting plans.</p>
<p>Under favorable circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce the miscalculation or misinterpretation that could lead to unintended escalation.</p>
<p>Researchers say that given future improvements, it is possible that eventually AI systems will develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term.</p>
<p>&#8220;Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,&#8221; said Andrew Lohn, co-author on the paper and associate engineer at RAND. &#8220;There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.&#8221;</p>
<p>RAND researchers based their perspective on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy, and national security.</p>
<p>The post <a href="https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/">By 2040, artificial intelligence could upend nuclear stability</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/by-2040-artificial-intelligence-could-upend-nuclear-stability/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
