<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>algorithmic Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/algorithmic/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/algorithmic/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 01 Apr 2021 09:12:50 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Algorithmic Warfare: Marines Lack Trust in Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 01 Apr 2021 09:12:48 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[algorithmic]]></category>
		<category><![CDATA[Lack]]></category>
		<category><![CDATA[Marines]]></category>
		<category><![CDATA[Warfare]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13841</guid>

					<description><![CDATA[<p>Source &#8211; https://www.nationaldefensemagazine.org/ Before the Marine Corps can fully utilize the power of AI technology and the efficiencies it brings, the service must overcome one major hurdle: <a class="read-more-link" href="https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/">Algorithmic Warfare: Marines Lack Trust in Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.nationaldefensemagazine.org/</p>



<p>Before the Marine Corps can fully utilize the power of AI technology and the efficiencies it brings, the service must overcome one major hurdle: trust.</p>



<p>That’s the message from Commandant Gen. David Berger.</p>



<p>“We’re going to have to trust artificial intelligence,” he said during remarks at the National Defense Industrial Association’s Expeditionary Warfare Conference in February. “We’re not trusting today.”</p>



<p>Whether it’s “sensor-to-shooter or fuel to a frontline unit, we put humans in the loop at about 16 places because we don’t trust it yet,” he said.</p>



<p>The best way to boost confidence in the technology is to have Marines train machines, he said. “Then we’ll trust it.”</p>



<p>Brig. Gen. Eric Austin, director of the Marine Corps’ Capabilities Development Directorate, said building that faith in artificial intelligence will unlock its potential.</p>



<p>Service leaders believe the technology will be a key enabler for troops.</p>



<p>“How do we improve the Marine’s ability to understand the environment, make a decision based on what they see and then act, and ensure that those actions are communicated across the force — and do it faster than an adversary?” Berger said. “Some of the technology for doing that already exists.”</p>



<p>Artificial intelligence could assist the service with sifting through large quantities of data to provide commanders with targeted information, he said.</p>



<p>Intelligence information “can be stored, sorted and downloaded from a cloud for our forward deployed forces,” he said.</p>



<p>Austin said intelligence analysis is one of the service’s most mature applications of AI. The Marine Corps is developing tools to process vast amounts of data, provide rapid situational awareness to relieve cognitive burdens and enable Marines to focus on making critical decisions, he said.</p>



<p>The service is also employing artificial intelligence for force protection, he noted. It is currently using the technology for a counter-drone effort to protect forward bases.</p>



<p>The capability is “really neat because it’s a sensor-agnostic approach that provides the inputs through an artificial intelligence framework and leverages algorithms to discriminate threats and offer means to mitigate them, to reduce the burden on operators and … increase the velocity and accuracy of human decisions,” Austin said.</p>



<p>The service is also investing in systems that allow Marines to access data at the tactical edge while operating in denied and degraded environments with limited bandwidth by prioritizing dissemination of the most critical data, he added.</p>



<p>Other key areas of AI development include business processes, support for maintenance missions and improving logistics, he said. It could also be used to inform force development.</p>



<p>The Marine Corps wants to move beyond just the analytics aspect of AI and pursue systems that can truly make recommendations rapidly, he noted.</p>



<p>Austin said the service is on the verge of unprecedented change driven not only by the emergence of new capabilities, but also by modifications to tactics, techniques and procedures.</p>



<p>“We’ve got to not only realize new capabilities, but we’ve got to know how to use them,” he said. “Part of that comes from just getting these capabilities and these tools in the hands of the Marines and watching them go.”</p>



<p>While the service has AI experts and data scientists on its team, developing the technology is not as simple as just knocking on their door and asking for a system, Austin said.</p>



<p>Artificial intelligence and other emerging technologies such as unmanned systems are ubiquitous across the military’s portfolio of activities and warfighting functional areas, “which adds to the complexity of our approach,” he said.</p>



<p>Key to the way ahead will be to continue to operationalize AI in meaningful and increasingly sophisticated ways, he said. Marines will need to value, understand, field and employ these types of platforms to gain an advantage. The service will need to invest in the science and technology underlying AI systems and test them during experimentation events, Austin said.</p>



<p>The Marine Corps will also need to be open to making mistakes and learning from them as it embarks on an AI-enabled future, he added.</p>



<p>“We’re going to goof it up sometimes. You’re going to fail,” he said.</p>



<p>Ultimately, the technology will be useful across multiple lanes, whether it’s business systems, applications like Joint All-Domain Command and Control — which is being pegged as an internet-of-military-things — or advanced weapon platforms, Austin said.</p>



<p>“We’re just going to learn a lot and find new ways to use it,” he said.</p>



<p>Meanwhile, AI can help with force readiness, Berger said in a recent Washington Post op-ed he co-authored with Air Force Chief of Staff Gen. Charles “CQ” Brown Jr., headlined “To Compete with China and Russia, the U.S. Military Must Redefine ‘Readiness.’”</p>



<p>In the op-ed, the generals argue that readiness “has become synonymous with availability,” and note that this short-term and narrow view is poorly suited for great power competition.</p>



<p>“We propose a new framework for defining readiness, one that better balances today’s needs with those of tomorrow, incorporating elements of current availability, modernization and risk,” they wrote. “As a starting point, we recommend adding to readiness metrics new layers of analysis utilizing artificial intelligence to leverage the military’s data-rich environment. Such a framework would enable military service chiefs to better prioritize investments in research, development and future force design initiatives, rather than spending the majority of their resources on making decades-old capabilities ready for employment.”</p>



<p>This framework would not only deliver the forces that combatant commanders need today but also invest in the capabilities needed for the future, they said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/">Algorithmic Warfare: Marines Lack Trust in Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/algorithmic-warfare-marines-lack-trust-in-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence and algorithmic irresponsibility: The devil in the machine?</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:20:05 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[algorithmic]]></category>
		<category><![CDATA[devil]]></category>
		<category><![CDATA[irresponsibility]]></category>
		<category><![CDATA[machine]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13585</guid>

					<description><![CDATA[<p>Source &#8211; https://techxplore.com/ The classic 1995 crime film The Usual Suspects revolves around the police interrogation of Roger &#8220;Verbal&#8221; Kint, played by Kevin Spacey. Kint paraphrases Charles Baudelaire, stating <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/">Artificial intelligence and algorithmic irresponsibility: The devil in the machine?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://techxplore.com/</p>



<p>The classic 1995 crime film <em>The Usual Suspects</em> revolves around the police interrogation of Roger &#8220;Verbal&#8221; Kint, played by Kevin Spacey. Kint paraphrases Charles Baudelaire, stating that &#8220;the greatest trick the Devil ever pulled was convincing the world he didn&#8217;t exist.&#8221; The implication is that the Devil is more effective when operating unseen, manipulating and conditioning behavior rather than telling people what to do. In the film&#8217;s narrative, his role is to cloud judgment and tempt us to abandon our sense of moral responsibility.</p>



<p>In our research, we see parallels between this and the role of artificial intelligence (AI) in the 21st century. Why? AI tempts people to abandon judgment and moral responsibility in just the same way. By removing a range of decisions from our conscious minds, it crowds out judgment from a bewildering array of human activities. Moreover, without a proper understanding of how it does this we cannot circumvent its negative effects.</p>



<p>The role of AI is so widely accepted in 2020 that most people are, in essence, completely unaware of it. Among other things, today AI algorithms help determine who we date, our medical diagnoses, our investment strategies, and what exam grades we get.</p>



<p><strong>Serious advantages, insidious effects</strong></p>



<p>With widespread access to granular data on human behavior harvested from social media, AI has permeated the key sectors of most developed economies. For tractable problems such as analyzing documents, it usually compares favorably with human alternatives that are slower and more error-prone, leading to enormous efficiency gains and cost reductions for those who adopt it. For more complex problems such as choosing a life-partner, AI&#8217;s role is more insidious: it frames choices and &#8220;nudges&#8221; choosers.</p>



<p>It is for these more complex problems that we see substantial risk associated with the rise of AI in decision-making. Every human choice necessarily involves transforming inputs (relevant information, feelings, etc.) into outputs (decisions). However, every choice inevitably also involves a <em>judgment</em> – without judgment we might speak of a reaction rather than a choice. The judgmental aspect of choice is what allows humans to attribute responsibility. But as more complex and important choices are made, or at least driven, by AI, the attribution of responsibility becomes more difficult. And there is a risk that both public and private sector actors embrace this erosion of judgment and adopt AI algorithms precisely in order to insulate themselves from blame.</p>



<p>In a recent research paper, we examined how reliance on AI in health policy may obfuscate important moral discussions and thus &#8220;deresponsibilize&#8221; actors in the health sector. (See &#8220;Anormative black boxes: artificial intelligence and health policy.&#8221;)</p>



<p>Our research&#8217;s key insights are valid for a wider variety of activities. We argue that the erosion of judgment engendered by AI blurs—or even removes—our sense of responsibility. The reasons are:</p>



<ul class="wp-block-list"><li><strong>AI systems operate as black boxes</strong>. We can know the input and the output of an AI system, but it is extraordinarily tricky to trace back how outputs were deduced from inputs. This apparently intractable opacity generates a number of moral problems. A black box can be causally responsible for a decision or action, but cannot explain how it has reached that decision or recommended that action. Even if experts open the black box and analyze the long sequences of calculations that it contains, these cannot be translated into anything resembling a human justification or explanation.</li><li><strong>Blaming impersonal systems of rules</strong>. Organizational scholars have long studied how bureaucracies can absolve individuals of the worst crimes. Classic texts include Zygmunt Bauman&#8217;s <em>Modernity and the Holocaust</em> and Hannah Arendt&#8217;s <em>Eichmann in Jerusalem</em>. Both were intrigued by how otherwise decent people could participate in atrocities without feeling guilt. This phenomenon was possible because individuals shifted responsibility and blame to impersonal bureaucracies and their leaders. The introduction of AI intensifies this phenomenon because now even leaders can shift responsibility to the AI systems that issued policy recommendations and framed policy choices.</li><li><strong>Attributing responsibility to artifacts rather than root causes</strong>. AI systems are designed to recognize patterns. But, contrary to human beings, they do not understand the meaning of these patterns. Thus, if most crime in a city is committed by a certain ethnic group, the AI system will quickly identify this correlation. However, it will not consider whether this correlation is an artifact of deeper, more complex, causes. 
Thus, an AI system can instruct police to discriminate between potential criminals based on skin color, but cannot understand the role played by racism, police brutality and poverty in causing criminal behavior in the first place.</li><li><strong>Self-fulfilling prophecies that cannot be blamed on anyone</strong>. Most widely used AIs are fed historical data. This works well for detecting physiological conditions such as skin cancers. The problem, however, is that AI classification of <em>social categories</em> can operate as a self-fulfilling prophecy in the long run. For instance, researchers on AI-based gender discrimination acknowledge the intractability of algorithms that end up exaggerating, without ever having introduced, pre-existing social bias against women, transgender and non-binary people.</li></ul>



<p><strong>What can we do?</strong></p>



<p>There is no silver bullet against AI&#8217;s deresponsibilizing tendencies and it is not our role, as scholars and scientists, to decide when AI-based input should be taken for granted and when it should be contested. This is a decision best left to democratic deliberation. (See &#8220;Digital society&#8217;s techno-totalitarian matrix&#8221; in <em>Post-Human Institutions and Organizations: Confronting the Matrix</em>.) It is, however, our role to stress that, in the current state of the art, AI-based calculations operate as black boxes that make moral decision-making more, rather than less, difficult.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/">Artificial intelligence and algorithmic irresponsibility: The devil in the machine?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-and-algorithmic-irresponsibility-the-devil-in-the-machine/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UBS looks to machine learning to plug FX liquidity gaps</title>
		<link>https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/</link>
					<comments>https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 May 2019 06:17:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithmic]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DIGITAL DRIVE]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ORCA]]></category>
		<category><![CDATA[UBS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3493</guid>

					<description><![CDATA[<p>Source:- kfgo.com ZURICH (Reuters) &#8211; As global currency markets grapple with a growing number of flash crashes triggered by shutdowns in algorithmic trading systems when volatility spikes, UBS <a class="read-more-link" href="https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/">UBS looks to machine learning to plug FX liquidity gaps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; kfgo.com</p>
<p>ZURICH (Reuters) &#8211; As global currency markets grapple with a growing number of flash crashes triggered by shutdowns in algorithmic trading systems when volatility spikes, UBS is utilizing machine learning technology to carry on dealing.</p>
<p>While algorithmic trading has played a growing role in the $5.1 trillion-a-day global foreign exchange market, accounting for up to a fifth of all trading and about 70 percent of all orders placed on multi-dealer currency platform EBS, machine learning is still relatively new.</p>
<p>UBS&#8217;s ORCA-Direct learns in real time, utilizing historical trading data to find the bank&#8217;s clients the best available liquidity when volatility rises.</p>
<p>First rolled out to a limited number of clients in May 2018, it helped volumes in the bank&#8217;s algorithmic FX business double in 2018. That made UBS the fastest-growing FX algo broker by market share from the second to the fourth quarter, according to Boston Consulting Group and Expand, a benchmarking house for financial institutions.</p>
<p>UBS is not the only large bank investing millions of dollars in algo technology as it cuts back on trading teams and relies more on automatically computed strategies to trade more efficiently.</p>
<p>JP Morgan, which also reported double-digit growth in its algorithmic trading business in recent months, has released a new machine learning algorithm, and Citibank is another top player in electronic currency trading.</p>
<p>MICROSECONDS</p>
<p>ORCA&#8217;s machine learning enables the algorithm to determine within microseconds the best platforms and execution sequence to use, estimating the probability of trading and market impact for each specific order and reducing costs for the bank&#8217;s clients.</p>
<p>That can be crucial in the fragmented currency market, where about 70 different platforms exist with multiple banks, hedge funds and technology firms jostling for market share.</p>
<p>The growing number of flash crashes – where prices of currencies can swing wildly within seconds – also complicates matters.</p>
<p>&#8220;What is unique about ORCA is the machine learning we put into it,&#8221; said Chris Purves, head of the bank&#8217;s FRC Strategic Development Lab. &#8220;Clients&#8230;can see their executions improving, they can see their fill rates improving.&#8221;</p>
<p>While first-quarter figures are not yet available for algorithmic trading performance, UBS said growth has continued this year.</p>
<p>The bank expanded ORCA to U.S. Treasury trading in late 2018, with further roll-outs expected in the Foreign Exchange, Rates and Credit (FRC) space.</p>
<p>DIGITAL DRIVE</p>
<p>Investment Bank Chief Operating Officer Beatriz Martín Jiménez said banks have traditionally focused on premium clients but new digital technologies would allow access to a broader customer base.</p>
<p>&#8220;The way forward for everybody is to build a platform where you&#8217;re going to be able to serve a much wider group of clients at a very low margin of cost per new client,&#8221; she said.</p>
<p>Its innovation lab expects to complete a further one or two major projects over the coming months.</p>
<p>Meanwhile, from Tuesday until Thursday, UBS will hold sessions for employees on digitalisation and innovation.</p>
<p>&#8220;We&#8217;ve learned along the way there&#8217;s a sort of science of innovation,&#8221; Purves said. &#8220;You can teach people to be better at innovation.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/">UBS looks to machine learning to plug FX liquidity gaps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ubs-looks-to-machine-learning-to-plug-fx-liquidity-gaps/feed/</wfw:commentRss>
			<slash:comments>28</slash:comments>
		
		
			</item>
	</channel>
</rss>
