<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ethical Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ethical/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ethical/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 03 Jul 2021 10:00:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE</title>
		<link>https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Jul 2021 10:00:45 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[dilemma]]></category>
		<category><![CDATA[ethical]]></category>
		<category><![CDATA[INSTANCES]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14734</guid>

					<description><![CDATA[<p>Source &#8211; INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE With the growing use of artificial intelligence, instances of ethical dilemmas are rising. ‘To be or not to be’- the ethical dilemma is a constant in human life whenever it comes to taking a decision. In the world of technology, artificial intelligence comes <a class="read-more-link" href="https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/">INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE</p>



<h2 class="wp-block-heading">With the growing use of artificial intelligence, instances of ethical dilemmas are rising.</h2>



<p>‘To be or not to be’ – the ethical dilemma is a constant of human life whenever a decision must be made. In the world of technology, artificial intelligence comes closest to human-like attributes: it aims to automate what human intelligence does when operating or making a decision. Yet an AI machine cannot make a truly independent decision; the mindset of its programmer is reflected in how the machine operates. An autonomous car facing an unavoidable accident, for instance, might have to decide whom to save first, or whether a child should be saved before an adult. The ethical challenges AI systems face include lack of transparency, biased decisions, surveillance practices in data gathering, the privacy of users, and fairness and risk to human rights and other fundamental values.</p>



<h4 class="wp-block-heading"><strong>Influences of Human Behavior</strong></h4>



<p>While human attention and patience are limited, the emotional energy of a machine is not – a machine’s limits are technical ones. Although this could benefit certain fields like customer service, this limitless capacity could also foster human addiction to robot affection. Many apps already use algorithms to nurture addictive behavior. Tinder, for example, is designed to keep users on the A.I.-powered app by serving up less likely matches the longer a user stays in a session.</p>



<h4 class="wp-block-heading"><strong>Training Biases</strong></h4>



<p>One of the most pressing and widely discussed A.I. ethics issues is trained bias in systems that perform predictive analysis, such as hiring or crime prediction. Amazon most famously ran into a hiring-bias issue after training an A.I.-powered algorithm to surface strong candidates based on historical data. Because previous candidates had been chosen through biased human decisions, the algorithm favored men as well, exposing gender bias in Amazon’s hiring process. In March 2019, the NYPD disclosed that it had developed Patternizr, an algorithmic machine-learning tool that sifts through police data to find patterns and connect similar crimes, and had used it since 2016. The software is not used for rape or homicide cases and excludes factors like gender and race when searching for patterns. Although this is a step forward from earlier algorithms that were trained on racially biased data to predict crime and parole violation, actively removing bias from historical data sets is still not standard practice. That means trained bias is at best an insult and an inconvenience; at worst, a risk to personal freedom and a catalyst of systemic oppression.</p>
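


<p>As a hedged illustration of what “actively removing bias from historical data sets” can involve, the minimal sketch below simply excludes protected attributes before fitting a model, much as Patternizr reportedly excludes gender and race. The file and column names are hypothetical, and dropping such columns is only a first step: proxy variables (a postcode, say) can still encode the same bias.</p>



<pre class="wp-block-code"><code>import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per past decision,
# with numeric features and a binary "outcome" label.
df = pd.read_csv("historical_decisions.csv")

# Exclude protected attributes from the feature set. This alone
# does NOT remove bias carried by correlated proxy variables.
PROTECTED = ["gender", "race"]
features = df.drop(columns=PROTECTED + ["outcome"])
labels = df["outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))</code></pre>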



<h4 class="wp-block-heading"><strong>Making of Fake News</strong></h4>



<p>Deepfakes are among the best-known misuses of AI. The technique uses A.I. to superimpose images, video, and audio onto other media, creating a false impression of authentic footage, most often with malicious intent. Deepfakes can include face swaps, voice imitation, facial re-enactment, lip-syncing, and more. Unlike older photo and video editing techniques, deepfake technology is becoming progressively more accessible to people without great technical skill. Similar techniques were used around the last U.S. presidential election, when Russia engaged in reality hacking (such as seeding fake news into Facebook feeds). This information warfare is becoming commonplace and exists not only to alter facts but to powerfully change opinions and attitudes. The practice was also seen during the Brexit campaign and is increasingly cited as an example of rising political tension and a confused global information environment.</p>



<h4 class="wp-block-heading"><strong>Privacy Concerns of the Consumers</strong></h4>



<p>Most consumer devices (from cell phones to Bluetooth-enabled light bulbs) use artificial intelligence to collect data about us in order to provide better, more personalized service. If consensual, and if the data collection is done with transparency, this personalization is an excellent feature. Without consent and transparency, it could easily become malignant. A phone-tracking app is useful after leaving your iPhone in a cab or losing your keys between the couch cushions, but tracking individuals can be dangerous at a small scale (as for domestic abuse survivors seeking privacy) or at a large scale (as with government surveillance).</p>



<p>These instances show how artificial intelligence raises ethical dilemmas. They also confirm that AI can only be as ethical as its creators and programmers make it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/">INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/instances-of-ethical-dilemma-in-the-use-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to avoid the ethical pitfalls of artificial intelligence and machine learning</title>
		<link>https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 11 Jun 2021 05:04:12 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ethical]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[pitfalls]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14185</guid>

					<description><![CDATA[<p>Source &#8211; https://newsroom.unsw.edu.au/ While many organisations are implementing artificial intelligence and machine learning solutions, there are costs and risks that need to be carefully considered.  The modern business world is littered with examples where organisations hastily rolled out artificial intelligence (AI) and machine learning (ML) solutions without due consideration of ethical issues, which has led to <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/">How to avoid the ethical pitfalls of artificial intelligence and machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://newsroom.unsw.edu.au/</p>



<p><strong>While many organisations are implementing artificial intelligence and machine learning solutions, there are costs and risks that need to be carefully considered. </strong></p>



<p>The modern business world is littered with examples of organisations that hastily rolled out artificial intelligence (AI) and machine learning (ML) solutions without due consideration of ethical issues, leading to very costly and painful lessons. Internationally, for example, IBM is being sued after allegedly misappropriating data from an app, while Goldman Sachs is under investigation for using an allegedly discriminatory AI algorithm. A closer homegrown example was the Robodebt debacle, in which the federal government deployed ill-thought-through algorithmic automation to send letters to recipients demanding repayment of social security payments dating back to 2010. The government settled a class action against it late last year at an eye-watering cost of $1.2 billion after the automated mailout system targeted many legitimate social security recipients.</p>



<p>“That targeting of legitimate recipients was clearly illegal,” says UNSW Business School’s Peter Leonard, a Professor of Practice for the School of Information Systems &amp; Technology Management and the School of Management and Governance at UNSW Business School. “Government decision-makers are required by law to take into account all relevant considerations and only relevant considerations, and authorising automated demands to be made of legitimate recipients was not proper application of discretions by an administrative decision-maker.”</p>



<p>Prof. Leonard says Robodebt is an important example of what can go wrong with algorithms into which due care and consideration are not factored. “When automation goes wrong, it usually does so quickly and at scale. And when things go wrong at scale, you don’t need each payout to be much for it to be a very large amount when added together across a cohort.”</p>



<h3 class="wp-block-heading">Why translational work is&nbsp;required&nbsp;</h3>



<p>Technological developments are very often ahead of both government laws and regulations and organisational policies around ethics and governance. AI and ML are classic examples of this, and Prof. Leonard explains there is major “translational” work to be done in order to bolster companies’ ethical frameworks.</p>



<p>“There’s still a very large gap between government policymakers, regulators, business, and academia. I don’t think there are many people today bridging that gap,” he observes. “It requires translational work, with translation between those different spheres of activities and ways of thinking. Academics, for example, need to think outside their particular discipline, department or school. And they have to think about how businesses and other organisations actually make decisions, in order to adapt their view of what needs to be done to suit the dynamic and unpredictable nature of business activity nowadays. So it isn’t easy, but it never was.”</p>



<p>Prof. Leonard says organisations are “feeling their way to better behaviour in this space”. He thinks that many organisations now care about adverse societal impacts of their business practices, but don’t yet know how to build governance and assurance to mitigate risks associated with data and technology-driven innovation. “They don’t know how to translate what are often pretty high-level statements about corporate social responsibility, good behaviour or ethics – call it what you will – into consistently reliable action, to give practical effect to those principles in how they make their business decisions every day. That gap creates real vulnerabilities for many corporations,” he says.</p>



<p>Data privacy serves as an example of what should be done in this space. Organisations have become quite good at working out how to evaluate whether a particular form of corporate behaviour is appropriately protective of the data privacy rights of individuals. This is achieved through “privacy impact assessments”, which are overseen by privacy officers, lawyers and other professionals who are trained to understand whether or not a particular practice in the collection and handling of personal information about individuals may cause harm to those individuals.</p>



<p>“There’s an example of how what can be a pretty amorphous concept – a breach of privacy – is reduced to something concrete and given effect through a process that leads to an outcome with recommendations about what the business should do,” Prof. Leonard says. </p>



<h3 class="wp-block-heading">Bridging functional gaps in organisations&nbsp;</h3>



<p>Disconnects also exist between key functional stakeholders required to make sound holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology in the AI is doing, but how that is integrated into the making of the decision by an organisation.”</p>



<p>This problem exists in many fields, and one in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that depend on advertising technology – which is in turn managed by a technology team. Separate again is data privacy, which is managed by yet another team, and Prof. Leonard says these teams don’t speak the same language as one another, making it hard to arrive at a strategically cohesive decision.</p>



<p>Some organisations are addressing this issue by creating new roles, such as a chief data officer or customer experience officer, who is responsible for bridging functional disconnects in applied ethics. Such individuals will often have a background in or experience with technology, data science and marketing, in addition to a broader understanding of the business than is often the case with the CIO.</p>



<p>“We’re at a transitional point in time where the traditional view of IT and information systems management doesn’t work anymore, because many of the issues arise out of analysis and uses of data,” says Prof. Leonard. “And those uses involve the making of decisions by people outside the technology team, many of whom don’t understand the limitations of the technology and the data.”</p>



<h3 class="wp-block-heading">Why regulators&nbsp;need&nbsp;teeth&nbsp;</h3>



<p>Prof. Leonard was recently appointed to NSW’s inaugural AI Government Committee – the first of its kind for any federal, state or territory government in Australia – to advise the NSW Minister for Digital, Victor Dominello, on how to deliver on key commitments in the state’s AI strategy. One focus for the committee is how to reliably embed ethics in how, when and why NSW government departments and agencies use AI and other automation in their decision-making.</p>



<p>Prof. Leonard said governments and other organisations that publish aspirational statements and guidance on ethical principles of AI – but fail to go further – need to do better. “For example, the Federal Government’s ethics principles for uses of artificial intelligence by public and private sector entities were published over 18 months ago, but there is little evidence of adoption across the Australian economy, or that these principles are being embedded into consistently reliable and verifiable business practices”, he said.  </p>



<p>“What good is this? It is like the 10 commandments. They are a great thing. But are people actually going to follow them? And what are we going to do if they don’t?” Prof. Leonard said it is not worth publishing statements of principles unless they are supplemented with processes and methodologies for assurance and governance of all automation-assisted decision-making. “It is not enough to ensure that the AI component is fair, accountable and transparent: the end-to-end decision-making process must be reviewed”.</p>



<h3 class="wp-block-heading">Why organisations need&nbsp;tools&nbsp;</h3>



<p>While some regulation will also be needed to build the right incentives, Prof. Leonard said organisations first need to know how to assure good outcomes, before they are legally sanctioned and penalised for bad ones. “The problem for the public sector is more immediate than for the business and not-for-profit sectors, because poor algorithmic inferences leading to incorrect administrative decisions can directly contravene state and federal administrative law,” he said.</p>



<p>In the business and not-for-profit sectors, the legal constraints are more limited in scope (principally anti-discrimination and consumer protection law). Because the legal constraints are limited, Prof. Leonard observed, reporting of the Robodebt debacle has not created the same urgency in the business sector as it has in the federal government sector.</p>



<p>Organisations need to be empowered to think methodically across and through possible harms, while there also needs to be adequate transparency in the system – and government policy and regulators should not lag too far behind. “A combination of these elements will help reduce the reliance on ethics alone within organisations, as they are provided with a strong framework for sound decision-making. And then you come behind with a big stick if they’re not using the tools or they’re not using the tools properly. Carrots alone and sticks alone never work; you need the combination of the two,” said Prof. Leonard.</p>



<p>The Australian Human Rights Commission’s report on human rights and technology was recently tabled in Federal Parliament. Human Rights Commissioner Ed Santow stated that the combination of the learnings from Robodebt and the report’s findings provides a “once-in-a-generation challenge and opportunity to develop the proper regulations around emerging technologies to mitigate the risks around them and ensure they benefit all members of the community”. Prof. Leonard observed that “the challenge is as much to how we govern automation-aided decision-making within organisations – the human element – as it is to how we assure that technology and data analytics are fair, accountable and transparent.”</p>



<h3 class="wp-block-heading">Risk management, checks and&nbsp;balances&nbsp;</h3>



<p>A good example of the need for this can be seen in the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry. It noted that the key individuals who assess and make recommendations in relation to prudential risk within banks are relatively powerless compared to those who control profit centres. “So, almost by definition, if you regard ethics and policing of economics as a cost within an organisation, and not an integral part of the making of profits by an organisation, you will end up with bad results because you don’t value highly enough the management of prudential, ethical or corporate social responsibility risks,” says Prof. Leonard. “You name me a sector, and I’ll give you an example of it.”</p>



<p>While he notes that larger organisations “will often fumble their way through to a reasonably good decision”, another key risk exists among smaller organisations. “They don’t have processes around checks and balances and haven’t thought about corporate social responsibility yet because they’re not required to,” says Prof. Leonard. Small organisations often work on the mantra of “moving fast and breaking things” and this approach can have a “very big impact within a very short period of time”, thanks to the potentially rapid growth rate of businesses in a digital economy.</p>



<p>“They’re the really dangerous ones, generally. This means the tools that you have to deliver have to be sufficiently simple and straightforward that they are readily applied, in such a way that an agile ‘move fast and break things’ type-business will actually apply them and give effect to them before they break things that really can cause harm,” he says.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/">How to avoid the ethical pitfalls of artificial intelligence and machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-avoid-the-ethical-pitfalls-of-artificial-intelligence-and-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>We Need Ethical Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 20 Feb 2021 05:50:02 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Cassandras]]></category>
		<category><![CDATA[ethical]]></category>
		<category><![CDATA[Need]]></category>
		<category><![CDATA[Positive]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12960</guid>

					<description><![CDATA[<p>Source &#8211; https://www.cmswire.com/ Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: It is sending out curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to <a class="read-more-link" href="https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/">We Need Ethical Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.cmswire.com/</p>



<p>Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: It is sending out curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to pin down the exact nature of the ethical questions it raises.</p>



<p>We need to heighten our awareness around the changes that AI demands in our thinking. If we don’t, AI will trigger embarrassing situations, erode reputations and damage businesses.</p>



<h2 class="wp-block-heading">Positive and Negative Results From Using AI</h2>



<p>Two years ago, Amazon abandoned the AI tool it used to recruit employees. The tool, which the company had trained on resumes submitted over a decade, preferred male applicants. More recently, Twitter apologized for deploying an image-cropping AI that preferred white faces over Black ones. These are embarrassing (and unforgivable) outcomes of AI, but their ethical implications are clear.</p>



<p>By contrast, the example of a South Korean national broadcaster, SBS, using AI to render songs in the voice of folk-rock singer Kim Kwang-Seok is delightful but considerably more complex. The popular singer has been dead for 25 years, yet continues to have a large fan following. SBS used 20 songs by Kim Kwang-Seok as a training tool and another 700 Korean folk songs to sharpen the accuracy of the AI. The AI now mimics any song in Kim Kwang-Seok’s style. A song, originally by Kim Bum-soo, rendered in the voice of Kim Kwang-Seok using AI, aired late in January. It was so perfect that it brought tears to the eyes of Kim Kwang-Seok fans. Music executives on the other hand were baffled: Who should the work be attributed to? Who owns the copyright for the work? Who will be paid royalties for the work? Will it be the AI programmer? The producer? For the curious, SBS paid a one-off fee to Kim Kwang-Seok&#8217;s family for borrowing his voice in the show. But publishing the song commercially presents perplexing questions. </p>



<p>Tomorrow’s songs need not necessarily be written by humans either. OpenAI&#8217;s text generators, like Generative Pre-trained Transformer 3 (GPT-3), could use deep learning/machine learning to write original songs that appear to be penned by Kim Bum-soo or any other songwriter. This opens limitless possibilities to continue producing work by an artist long after their death. Could this mean that AI can write and direct &#8220;2050: Beyond the Future&#8221; to keep alive the cinematic magic created by Arthur C. Clarke and Stanley Kubrick with &#8220;2001: A Space Odyssey&#8221;?</p>



<p>GPT-3 has the potential to do that. Last June it sent powerful waves across the AI community when Sharif Shameem, the app development head of a startup, used it to construct a program by simply describing a UI in plain English. GPT-3 responded by spitting out JSX code, and that code produced a UI matching what Shameem wanted. Shameem said, “I only had to write two samples to give GPT-3 context for what I wanted it to do. It then properly formatted all of the other samples.”</p>
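


<p>Those “two samples” are an instance of what is now called few-shot prompting: the worked examples go straight into the prompt and the model infers the pattern. Below is a minimal sketch of the idea against OpenAI’s completions API of that era; the prompt wording, engine choice and placeholder key are assumptions for illustration, not Shameem’s actual code.</p>



<pre class="wp-block-code"><code>import openai  # legacy 0.x SDK, as used in the GPT-3 beta era

openai.api_key = "sk-..."  # placeholder; supply your own key

# Two worked description-to-JSX samples give GPT-3 the pattern;
# the final "code:" line is left open for the model to complete.
prompt = (
    "description: a button that says Submit\n"
    "code: &lt;button&gt;Submit&lt;/button&gt;\n\n"
    "description: a red heading that says Welcome\n"
    "code: &lt;h1 style={{color: 'red'}}&gt;Welcome&lt;/h1&gt;\n\n"
    "description: an input with the placeholder Email\n"
    "code:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop="\n\n",        # stop at the end of the generated sample
)
print(response.choices[0].text.strip())</code></pre>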



<p>GPT-3 doesn’t only reproduce “stuff” the way humans do. It is a performer as well. In one instance, it was given code in Python and asked to describe what the code does. The program not only did that, it also offered improvements and suggestions on where to post the code after the improvement. GPT-3 can identify paintings from descriptions and recommend books. It can write entire articles for publications. In one instance, GPT-3 managed to express a bunch of popular movies in emoji. The extraordinary part? GPT-3 requires no task-specific training: a handful of examples in the prompt is enough. It uses 175 billion parameters (by comparison, the closest anything comes to GPT-3 is Microsoft&#8217;s Turing NLG, which uses 17 billion parameters) to generate text that sounds human. You could use it to write your next quarterly report and save some valuable time.</p>



<h2 class="wp-block-heading">The Danger of Deep Fakes</h2>



<p>There are obvious social dangers in deploying AI like this, the most direct being bad training data of the kind behind the Amazon recruitment breakdown and the Twitter image-cropping failure. But worse lurks around the corner. It is easy to use capabilities of the type used by the Korean broadcaster, and those of GPT-3, to produce deep fakes.</p>
<p>The post <a href="https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/">We Need Ethical Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/we-need-ethical-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>An Ethical Framework for Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Jun 2020 07:37:00 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ethical]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9406</guid>

					<description><![CDATA[<p>Source: law.com China has a population of approximately 1.4 billion people and the Chinese government is reportedly using a combination of artificial intelligence (AI) and facial recognition software to monitor their movements and online activities. Even more troubling, China is using the same technology to track and control a Muslim minority group, the Uyghurs. China <a class="read-more-link" href="https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/">An Ethical Framework for Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: law.com</p>



<p>China has a population of approximately 1.4 billion people, and the Chinese government is reportedly using a combination of artificial intelligence (AI) and facial recognition software to monitor their movements and online activities. Even more troubling, China is using the same technology to track and control a Muslim minority group, the Uyghurs. China has subverted the potential of artificial intelligence to impose a form of racist social control. AI offers new opportunities to enhance business productivity and enrich the personal lives of individuals.</p>



<p>Without a broad agreement on the ethical implementation of AI, the still untapped potential of AI can be corrupted.</p>



<p>This column is the first of a two-part series on creating an ethical AI policy framework for the implementation of AI supported applications. It is based on the groundbreaking work of dozens of expert IT lawyers who contributed to the book <em>Responsible AI </em>published by the International Technology Law Association in 2019. We have previously considered the technological elements of AI, facial recognition and personal privacy issues in our recent columns published here, which may provide some useful background for those new to the subject of AI. See “Artificial Intelligence: The Fastest Moving Technology,” NYLJ (March 9, 2020); “Waking Up to Artificial Intelligence,” NYLJ (Feb. 10, 2020).</p>



<h4 class="wp-block-heading">Ethical Purpose</h4>



<p>Organizations that develop AI systems have a great responsibility to understand how a system will be used and to ensure that its implementation will not be harmful to society. AI system developers should require that the purpose of the software implementation be identified in reasonable detail. They must ensure that the purposes of new AI systems are ethical and not intentionally harmful.</p>



<p>As the full potential of AI for both good and harm is recognized by national governments, some regulatory statutes or rules will follow. Laws that regulate AI should promote ethical uses that do not cause harm, avoid unreasonable disruptions, and do not promote the distribution of false information.</p>



<p>AI is already being used in the workplace to support automation and to speed up or eliminate routine administrative tasks. Organizations that develop or deploy AI systems should consider the net effects of any implementation on their employees and their work. In some instances, workers will be displaced by automated systems. To gain greater understanding and acceptance of AI systems among their employees, businesses should allow the affected workers to participate in the decision-making process.</p>



<p>AI systems and automation usually increase efficiency and, as a result, workers are replaced by these systems. To promote efficiency and productivity, governments should consider creating programs for any displaced workers to learn new useful skills. Similarly, governments should promote educational policies to prepare children with the skills they will need for the emerging new economy, including life-long learning.</p>



<p>The implementation of AI systems may have an adverse impact on the environment. When developing AI systems, organizations should assess the environmental impact of these new systems. Governments should put into effect statutes or rules that ensure complete and transparent investigations of any adverse or unanticipated environmental impacts of AI systems.</p>



<p>Unfortunately, AI systems have been recognized as creating strategic advantages in weapons systems. The use of lethal autonomous weapon systems (LAWS) should respect international principles of humanitarian law, including, for example, the Geneva Conventions of 1949. LAWS can be both accurate and deadly. As such, LAWS should always be under human control and oversight in every situation where they are used in a conflict.</p>



<p>The recent very public policy disputes relating to posts on Twitter and Facebook reveal how AI may be used to weaponize false or misleading information. Companies that develop or deploy AI systems to promote or filter information on Internet platforms, including social media, should take measures to minimize the spread of false or misleading information. It is recommended that these systems provide a means for users to flag potentially false or harmful content. Government agencies should provide clear guidelines identifying prohibited content in a way that respects the rights and equality of individuals.</p>



<h4 class="wp-block-heading">Transparency and Explainability</h4>



<p>Transparency refers to the duty of every business and government entity to inform customers and citizens that they are interacting with AI systems. At a minimum, users should be provided with information about what the system does, how it performs its tasks, and the specifications and/or data used in training the system. The goal of transparency is to avoid creating an AI system that functions as an opaque “black box.”</p>



<p>Explainability refers to the duty of organizations using an AI decision-making process to provide accurate information, in human-understandable terms, as to how decisions or outcomes were reached. For example, if an AI system is used to process a mortgage loan application, the loan applicant should be able to find out the factors supporting the credit decision, including credit ratings, the quality and location of the house, and recent comparable sales in neighboring areas.</p>
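


<p>To make explainability concrete, here is a hedged sketch that assumes a simple linear scoring model: each factor’s contribution to the decision is just its weight times the applicant’s value, which can be reported back in plain terms. The factor names, weights and inputs are illustrative assumptions; a production credit model would need validated attribution methods.</p>



<pre class="wp-block-code"><code># Hedged sketch: explaining one decision of a linear credit model
# by listing each factor's signed contribution to the score.
# Factor names, weights and inputs are illustrative only.
weights = {
    "credit_rating": 2.5,
    "property_quality": 1.1,
    "comparable_sales": 0.8,
    "debt_to_income": -3.0,
}
applicant = {  # normalized inputs for one applicant
    "credit_rating": 0.9,
    "property_quality": 0.4,
    "comparable_sales": 0.7,
    "debt_to_income": 0.6,
}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score: {score:.2f}")
# Report factors in order of influence, largest magnitude first.
for factor, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {c:+.2f}")</code></pre>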



<p>Transparency tends to preserve the public trust in AI systems and to demonstrate that the decisions made by an AI system are fair and impartial.</p>



<p>Transparency and explainability become increasingly important as the AI system deals with important decisions involving sensitive personal or financial data. In designing the AI system, transparency should meet the reasonable expectation of the average user. For this reason, transparency and explainability should be built into the design of any AI system.</p>



<h4 class="wp-block-heading">Fairness and Non-Discrimination</h4>



<p>The design of AI systems is a human endeavor and necessarily incorporates the knowledge, life experiences and prejudices of the designers. Companies that develop or deploy AI systems should make users aware that these systems reflect the goals and potential biases of the developers. As has been studied in other contexts, implicit bias is part of the human condition, and AI system developers may incorporate these values into the methods and goals of a new AI system. In addition, AI systems are often “trained” by reviewing large data sets. For example, an AI system assisting in loan decisions might have used a data set indicating that a certain racial or ethnic minority has a higher-than-average loan default rate. Screening for such a bias is necessary for a fair system.</p>
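


<p>Screening can begin with something as simple as comparing outcome rates across groups in the training data, as in the hedged sketch below. The file and column names are hypothetical, and the 1.25x threshold is an arbitrary illustration; a real review would use formal disparate-impact tests.</p>



<pre class="wp-block-code"><code>import pandas as pd

# Hypothetical training data with a sensitive attribute ("group")
# and a binary label ("defaulted").
df = pd.read_csv("loan_training_data.csv")

overall = df["defaulted"].mean()
rates = df.groupby("group")["defaulted"].mean()
print(rates)

# Flag groups whose default rate in the training data sits well
# above the overall rate; a model trained on this data may learn
# to penalize those groups. The 1.25x factor is arbitrary.
flagged = rates[rates.gt(1.25 * overall)]
if not flagged.empty:
    print("screen before training; skewed groups:", list(flagged.index))</code></pre>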



<p>The decisions made by AI systems must be at least as fair and non-discriminatory as the human decisions they replace. As such, fairness should be prioritized in the design of an AI system’s algorithms and in the training data used. Without attention to fairness, AI systems have the potential to perpetuate and amplify bias, and this could have a broad social impact. To minimize these issues, AI systems with a significant social impact should be independently reviewed and tested periodically.</p>



<h4 class="wp-block-heading">Safety and Reliability</h4>



<p>AI systems currently control a wide variety of automated equipment and will have a broader impact when autonomous vehicles are in common use. Whether in the factory or traveling on the highway, AI systems will pose a potential danger to individuals. As to the issue of safety, AI system developers must ensure that AI systems perform correctly, without harming users, resources, or the environment. It is essential to minimize unintended consequences and errors in the operation of any system.</p>



<p>These AI-controlled systems must also operate reliably. Reliability refers to consistency of performance, i.e., the probability of performing a function without failure and within the system’s parameters over an extended period of time. Organizations that develop or deploy AI systems in conjunction with a piece of equipment must clearly define the principles underlying its operation and the boundaries of its decision-making powers. When safety is a priority, the appropriate government agency should require testing of AI systems to ensure reliability. The systems should be trained on data sets that are as “error-free” as possible. When an AI system is involved in an incident with an unanticipated, adverse or fatal outcome, it should be subject to a transparent investigation.</p>



<p>The possibility of personal injury and the potential liability raises a host of legal concerns. Legislators should consider whether the current legal framework, including product liability law, requires adjustments to meet the unique characteristics of AI systems.</p>



<p>For a more detailed review of the above issues the book <em>Responsible AI</em> can be purchased from the International Technology Law Association.</p>
<p>The post <a href="https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/">An Ethical Framework for Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/an-ethical-framework-for-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence and our ethical responsibility</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 18 Mar 2020 06:31:29 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ethical]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[progressing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7518</guid>

					<description><![CDATA[<p>blog.timesunion.com Artificial Intelligence (AI) was originally conceived as replicating human intelligence. That turns out to be harder than once thought. What is rapidly progressing is deep machine learning, with resulting artificial systems able to perform specific tasks (like medical diagnosis) better than humans. That’s far from the integrated general intelligence we have. Nevertheless, an artificial <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/">Artificial Intelligence and our ethical responsibility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>blog.timesunion.com</p>



<p>Artificial Intelligence (AI) was originally conceived as replicating human intelligence. That turns out to be harder than once thought. What is rapidly progressing is deep machine learning, with the resulting artificial systems able to perform specific tasks (like medical diagnosis) better than humans. That is far from the integrated general intelligence we have. Nevertheless, an artificial system with the latter may yet arrive. Some foresee a coming “singularity”, when AI surpasses human intelligence and then takes over its own further evolution. Which changes everything.</p>



<p>Much AI fearmongering warns this could be a mortal threat to us – that superior AI beings could enslave or even eliminate us. I’m extremely skeptical of such doomsaying, mainly because AI would still be imprisoned under human control. (“HAL” in <em>2001</em> did get unplugged.) Nevertheless, AI’s vast implications raise many ethical issues, also much written about.</p>



<p>One such article, with a unique slant, was by Paul Conrad Samuelsson in Philosophy Now magazine. He addresses our ethical obligations toward AI.</p>



<p>Start from the question of whether any artificial system could ever possess a humanlike conscious self. I’ve had that debate with David Gelernter, who answered no. Samuelsson echoes my position, saying “those who argue against even the theoretical possibility of digital consciousness [disregard] that human consciousness somehow arises from configurations of unconscious atoms.” While Gelernter held that our neurons can’t be replicated artificially, I countered that their functional equivalent surely can be. Samuelsson says that while such “artificial networks are still comparatively primitive,” eventually “they will surpass our own neural nets in capacity, creativity, scope and efficiency.”</p>



<p>And thus attain consciousness with selves like ours. Having the ability to feel — including to suffer.</p>



<p>I was reminded of Jeremy Bentham’s argument against animal cruelty: regardless of whatever else might be said of animal mentation, the dispositive fact is their capacity for suffering.</p>



<p>Samuelsson considers the potential for AI suffering a very serious concern. Because, indeed, with AI capabilities outstripping the human, the pain could likewise be more intense. He hypothesizes a program putting an AI being into a concentration camp, but on a loop with a thousand reiterations per second. Why, one might ask, would anyone do that? But Samuelsson then says, “Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world.”</p>



<p> That may still be far-fetched. Yet the next passage really caught my attention. “If this description does not stir you,” Samuelsson writes, “it may be because the concept of a trillion subjects suffering limitlessly inside a computer is so abstract to us that it does not entice our empathy. But this itself shows us” the problem. We do indeed have a hard time conceptualizing an AI’s pain as remotely resembling human pain. However, says Samuelsson, this is a failure of imagination. </p>



<p>Samantha, the AI companion in the film <em>Her</em>, is a person, with all the feelings people have (maybe more). The fact that her substrate is a network of circuits inside a computer rather than a network of neurons inside a skull is immaterial. If anything, her aliveness finally outstripped that of her human lover. And surely any suffering she’s made to experience would carry at least equal moral concern.</p>



<p>I suspect our failure of imagination regarding Samuelsson’s hypotheticals is because none of us has ever actually met a Samantha. That will change, and with it, our moral intuitions.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/">Artificial Intelligence and our ethical responsibility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-and-our-ethical-responsibility/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
