<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>bias Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/bias/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/bias/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 23 Feb 2021 10:24:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>ARTIFICIAL INTELLIGENCE AND BIAS: THE BUCK STOPS (W)HERE</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Feb 2021 10:24:53 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[BUCK]]></category>
		<category><![CDATA[HERE]]></category>
		<category><![CDATA[STOPS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13019</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ While biases will always be part of artificial intelligence, is it time for an AI renaissance? It is not surprising that many industries are <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/">ARTIFICIAL INTELLIGENCE AND BIAS: THE BUCK STOPS (W)HERE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">While biases will always be part of artificial intelligence, is it time for an AI renaissance?</h2>



<p>It is not surprising that many industries are turning to artificial intelligence (AI) technologies like machine learning to review vast amounts of data. Be it analyzing financial records to check whether one qualifies for a loan, spotting errors in legal contracts, or determining whether one suffers from schizophrenia, artificial intelligence has got you covered! However, is it totally foolproof and impartial? Can this modern technology be prone to bias, just like humans? Let us find out!</p>



<p>Bias risks differ for each business, industry, and organization, and bias can find its way into artificial intelligence systems in numerous ways. For instance, it can be introduced intentionally, via a stealth attack on an AI system, or unintentionally, in ways that make it hard to ever detect. It can also stem from humans who supply already biased data that reflects their own thinking, or from data-sampling bias. There are also long-tail biases, which occur when certain categories are missing from the training data altogether.</p>



<p>It is obvious that the presence of bias in data can cause an artificial intelligence model to become biased, but what is more dangerous is that the model can actually amplify that bias. For example, a team of researchers found that 67% of the images of people cooking in a training set showed women, yet the resulting algorithm labeled 84% of the cooks as women. Deep learning algorithms (another AI technology) are increasingly being used to make life-impacting decisions, such as hiring employees, criminal justice, and health diagnosis. In these scenarios, if the algorithms make incorrect decisions due to AI bias, the results could be devastating in the long run.</p>
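


<p>As a rough sketch of how such amplification can be measured (the label lists below are hypothetical stand-ins, not the study&#8217;s data), one can compare the rate of a label in the training data with the rate at which the trained model predicts it:</p>



<pre class="wp-block-code"><code># Sketch: measure how much a model amplifies a bias present in its training data.
# The lists below are hypothetical stand-ins for real annotations and predictions.
train_labels = ["woman"] * 67 + ["man"] * 33   # 67% of cooking images show women
model_preds  = ["woman"] * 84 + ["man"] * 16   # the model labels 84% of cooks as women

def rate(labels, value):
    """Fraction of labels equal to value."""
    return labels.count(value) / len(labels)

data_rate = rate(train_labels, "woman")   # 0.67
pred_rate = rate(model_preds, "woman")    # 0.84

# Amplification: how far the model's predicted rate drifts beyond the data rate.
amplification = pred_rate - data_rate
print(f"data rate {data_rate:.2f}, predicted rate {pred_rate:.2f}, "
      f"amplification {amplification:+.2f}")
</code></pre>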



<p>For instance, in 2016, ProPublica, a nonprofit news organization, critically analyzed an AI-powered risk-assessment tool known as COMPAS, which has been used to predict the likelihood that a prisoner or accused person will commit further crimes if released. It was observed that the false-positive rate (labeled as “high-risk” but did not re-offend) was nearly twice as high for black defendants (error rate of 45%) as for white defendants (error rate of 24%). Apart from this, there have been multiple instances where artificial intelligence tools misclassified, mislabeled, or misidentified people because of their race, gender, or ethnicity. In the same year, for example, when the Beauty.AI website employed AI judges for a beauty contest, people with light skin were judged far more attractive than people with dark skin.</p>
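


<p>To make the per-group false-positive measurement concrete, here is a minimal sketch; the records are fabricated for illustration and are not the COMPAS data:</p>



<pre class="wp-block-code"><code># Sketch: per-group false-positive rate, i.e. labeled "high-risk" but did not re-offend.
# These records are fabricated for illustration only.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

def false_positive_rate(rows):
    """Among people who did not re-offend, the share labeled high-risk."""
    negatives = [r for r in rows if not r["reoffended"]]
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

groups = sorted({r["group"] for r in records})
for g in groups:
    fpr = false_positive_rate([r for r in records if r["group"] == g])
    print(f"group {g}: false-positive rate {fpr:.0%}")
</code></pre>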



<p>It is important to uncover unintentional artificial intelligence bias and&nbsp;align technology tools with an organization&#8217;s diversity, equity, and inclusion policies and values. According to PwC&#8217;s 2020 AI Predictions report, 68% of organizations still need to address fairness in the AI systems they develop and deploy.</p>



<p>Machine learning and deep learning models are usually built in three phases: training, validation, and testing. Though bias can creep in long before the data is collected, and at many other stages of the deep-learning process, it typically takes hold during the training phase itself. Generally, parametric algorithms like linear regression, linear discriminant analysis, and logistic regression are prone to high bias. As artificial intelligence systems become more dependent on deep learning and machine learning, owing to their usefulness, tackling AI bias gets trickier.</p>
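


<p>A minimal sketch of that three-phase split using scikit-learn, on a toy dataset standing in for real business data:</p>



<pre class="wp-block-code"><code># Sketch: the usual training / validation / testing split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real business data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out 20% of the data for final testing.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Split the remainder into training (75%) and validation (25%).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0, stratify=y_rest)

# Bias already present in X_train and y_train is what the model will learn here,
# which is why auditing the training data matters as much as the model itself.
</code></pre>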



<p>While these biases are being addressed at an accelerating pace, the key challenge lies in defining bias itself, because what looks like bias to one developer or data scientist may not to another. Another concern is what guidelines ‘fairness’ should adhere to – is there any technical way to define fairness in artificial intelligence models? It is also important to note that competing definitions create confusion and cannot all be satisfied at once. Further, it is crucial to determine what the error rates and accuracy should be for different subgroups in a dataset. Next, data scientists need to factor in the social context: if a machine learning model works perfectly in a criminal-justice scenario, that does not imply it will be suitable for screening candidates for a job position. Hence social context matters!</p>
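


<p>To illustrate why ‘fairness’ resists a single technical definition, the sketch below (with fabricated predictions) computes two common criteria, demographic parity and equal opportunity; a model can satisfy one while violating the other:</p>



<pre class="wp-block-code"><code># Sketch: two common fairness criteria computed on fabricated predictions.
# They frequently disagree, which is part of why "fairness" is hard to pin down.
rows = [
    # group, true label (1 = qualified), predicted label (1 = approved)
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def selection_rate(group):
    """Demographic parity: share of the group that receives a positive prediction."""
    g = [r for r in rows if r[0] == group]
    return sum(pred for _, _, pred in g) / len(g)

def true_positive_rate(group):
    """Equal opportunity: share of truly qualified members who are approved."""
    g = [r for r in rows if r[0] == group and r[1] == 1]
    return sum(pred for _, _, pred in g) / len(g)

for grp in ("A", "B"):
    print(f"group {grp}: selection rate {selection_rate(grp):.2f}, "
          f"true-positive rate {true_positive_rate(grp):.2f}")
# Here both groups have the same selection rate (parity holds), yet qualified
# members of group A are approved half as often (equal opportunity fails).
</code></pre>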



<p>There is no doubt that opting for more diverse data can alleviate AI bias by making room for more data touchpoints and indicators that cater to different priorities and insights, but it is not enough on its own. Meanwhile, the presence of proxies for specific groups makes it hard to build a deep learning model, or any other AI model, that is aware of all potential sources of bias.</p>



<p>Lastly, not every AI bias has a negative footprint or influence. Explainable AI (XAI) can help discern whether a model is using a good bias or a bad bias to make a decision. It also tells us which factors matter most when the model makes a decision. Though XAI will not eliminate bias, it will enable human users to understand, appropriately trust, and effectively manage AI systems.</p>
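


<p>A small sketch of that idea, using permutation importance from scikit-learn on a toy model; the dataset and feature names are invented for illustration:</p>



<pre class="wp-block-code"><code># Sketch: use permutation importance (one simple explainability tool) to see which
# factors a trained model leans on most. The dataset and feature names are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "age", "postcode_area", "years_employed"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the model's score.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
# A surprisingly important proxy feature (e.g. postcode_area) is a hint that the
# model may be relying on a "bad" bias rather than a legitimate signal.
</code></pre>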
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/">ARTIFICIAL INTELLIGENCE AND BIAS: THE BUCK STOPS (W)HERE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-and-bias-the-buck-stops-where/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Role Of Bias In Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Feb 2021 08:34:48 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[exponentially]]></category>
		<category><![CDATA[role]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12709</guid>

					<description><![CDATA[<p>Source &#8211; https://www.forbes.com/ Steve is the Head of Data Science and AI at Australian Computer Society, a proactive social media contributor and LinkedIn influencer. Artificial intelligence (AI) has <a class="read-more-link" href="https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/">The Role Of Bias In Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.forbes.com/</p>



<p><em>Steve is the Head of Data Science and AI at Australian Computer Society, a proactive social media contributor and LinkedIn influencer.</em></p>



<p>Artificial intelligence (AI) has evolved exponentially, from driverless vehicles to voice automation in households, and is no longer just a term from sci-fi books and movies. The future of artificial intelligence is arriving sooner than the projections seen in the futuristic&nbsp;<em>Minority Report</em>&nbsp;film. AI will become an essential part of our lives in the next few years, approaching the level of super-intelligent computers that transcend human analytical abilities. Imagine unlocking your car simply by approaching it, or getting products delivered to your door by drone; AI can make it all a reality.</p>



<p>However, recent discussions about algorithmic bias expose the loopholes in these &#8220;so perfect&#8221; AI systems. Algorithmic bias is the lack of fairness that results from the outputs of a computer system. That lack of fairness comes in different forms, but it can broadly be understood as prejudice against one group based on a particular categorical distinction.</p>



<p>Human bias is an issue that has been well researched in psychology for years. It arises from implicit associations, the biases we are not conscious of, which can affect an event&#8217;s outcomes. Over the last few years, society has begun to grapple with exactly how much these human prejudices, with devastating consequences, can find their way into AI systems. Being profoundly aware of these threats and seeking to minimize them is an urgent priority, as many firms are looking to deploy AI solutions. Algorithmic bias in AI systems can take varied forms, such as gender bias, racial prejudice and age discrimination.</p>



<p>The critical question to ask is: What is the root cause of bias in AI systems, and how can it be prevented? Bias may infiltrate algorithms in numerous forms. Even if sensitive variables such as gender, ethnicity or sexual identity are excluded, AI systems learn to make decisions based on training data, which may contain skewed human decisions or represent historical or social inequities.</p>



<p>The role of data imbalance is vital in introducing bias. For instance, in 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to interact with people through tweets and direct messages. However, it started replying with highly offensive and racist messages within a few hours of its release. The chatbot was trained on anonymous public data and had a built-in learning feature, which opened the door to a coordinated attack by a group of people intent on introducing racist bias into the system. Some users were able to inundate the bot with misogynistic, racist and anti-Semitic language. This incident opened a broader audience&#8217;s eyes to the potential negative implications of unfair algorithmic bias in AI systems.</p>



<p>Facial recognition systems are also under scrutiny. Class imbalance is a leading issue in facial recognition software. A dataset called &#8220;Faces in the Wild,&#8221; considered the benchmark for testing facial recognition software, was 70% male and 80% white. Although such a benchmark might be good enough for lower-quality pictures, whether it truly represents faces &#8220;in the wild&#8221; is highly debatable.</p>
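


<p>A minimal sketch of the kind of audit that exposes such imbalance, assuming a list of per-image metadata with hypothetical &#8220;gender&#8221; and &#8220;skin_tone&#8221; fields:</p>



<pre class="wp-block-code"><code># Sketch: audit the demographic make-up of a face dataset before using it as a benchmark.
# 'metadata' is a hypothetical list of per-image attribute dicts.
from collections import Counter

metadata = [
    {"gender": "male", "skin_tone": "light"},
    {"gender": "male", "skin_tone": "light"},
    {"gender": "female", "skin_tone": "dark"},
    # ... one entry per image in the real dataset
]

def distribution(records, field):
    """Percentage breakdown of a single attribute across the dataset."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: round(100 * n / total, 1) for value, n in counts.items()}

for field in ("gender", "skin_tone"):
    print(field, distribution(metadata, field))
# A heavily skewed breakdown (for example 70% male, 80% light-skinned) is a warning
# that accuracy measured on this benchmark may not transfer to everyone.
</code></pre>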



<p>Concerns are arising as to how to test facial recognition technologies transparently. On June 30, 2020, the Association for Computing Machinery (ACM) in New York City called for the cessation of private and government use of facial recognition technologies due to &#8220;clear bias based on ethnic, racial, gender and other human characteristics.&#8221; The ACM said that the bias caused &#8220;profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups.&#8221; Due to the pervasive nature of AI, it is crucial to address the algorithmic bias issues to make the systems more fair and inclusive.</p>



<p>Apart from algorithms and data, the researchers and engineers developing these systems are also responsible for AI bias. According to VentureBeat, a Columbia University study found that &#8220;the more homogenous the [engineering] team is, the more likely it is that a given prediction error will appear.&#8221; Homogeneity can create a lack of empathy for the people who face discrimination, leading to an unconscious introduction of bias into these AI systems.</p>



<p>The hidden use of AI systems in our society can be dangerous for marginalized people, and people often have no way to opt out of these AI systems&#8217; biased surveillance. Countries like the U.S. and China have deployed thousands of AI-enabled cameras that track people&#8217;s movements without their consent. This undermines those who are discriminated against, and it can also diminish individuals&#8217; willingness to partake in the economy and culture.</p>



<p>By promoting distrust and delivering distorted outcomes, such bias lowers the potential of AI for industry and society. Corporate executives need to ensure that human decision-making is strengthened by the AI technologies they use. They are responsible for supporting scientific advancement and standards that can minimize AI bias.</p>



<p>Joy Buolamwini, a postgraduate researcher at the Massachusetts Institute of Technology, recognized the repercussions of algorithmic bias in our society and founded the Algorithmic Justice League to address them. The organization&#8217;s primary goal is to highlight the social and cultural implications of AI bias through art and scientific research. The work of such organizations will be monumental in addressing obscure issues like AI bias. Along with scientific researchers, governments have to join forces to address the AI bias problem and move toward a more progressive and fair society.</p>



<p>In seeking to explain AI, and science in general, one must account for global societal complexities, because most of the fundamental transition emerges at the social level.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/">The Role Of Bias In Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-role-of-bias-in-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s AutoML Zero lets the machines create algorithms to avoid human bias</title>
		<link>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/</link>
					<comments>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 16 Apr 2020 07:18:14 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Tech]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8210</guid>

					<description><![CDATA[<p>Source: thenextweb.com It looks like Google‘s working on some major upgrades to its autonomous machine learning development language ‘AutoML.’ According to a pre-print research paper authored by <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Google’s AutoML Zero lets the machines create algorithms to avoid human bias</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thenextweb.com</p>



<p>It looks like Google’s working on some major upgrades to its autonomous machine learning development language ‘AutoML.’ According to a pre-print research paper authored by several of the big G’s AI researchers, ‘AutoML Zero’ is coming, and it’s bringing evolutionary algorithms with it.</p>



<p>AutoML is a tool from Google that automates the process of developing machine learning algorithms for various tasks. It’s user-friendly, fairly simple to use, and completely open-source. Best of all, Google’s always updating it.</p>



<p>In its current iteration, AutoML has a few drawbacks. You still have to manually create and tune several algorithms to act as building blocks for the machine to get started. This allows it to take your work and experiment with new parameters in an effort to optimize what you’ve done. Novices can get around this problem by using pre-made algorithm packages, but Google’s working to automate this part too.</p>



<p>Per the Google team’s pre-print paper:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.</p><p>Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging.</p><p>Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available.</p></blockquote>



<p>In other words: Google’s figured out how to tap evolutionary algorithms for AutoML using nothing but basic math concepts. The developers created a learning paradigm in which the machine will spit out 100 randomly generated algorithms and then work to see which ones perform the best.</p>



<p>After several generations, the algorithms become better and better until the machine finds one that performs well enough to evolve further. The algorithms that survive the evolutionary process are then tested against various standard AI problems, such as computer vision, to see whether they can solve new problems.</p>
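


<p>That loop can be hard to picture from prose alone, so here is a toy sketch of evolutionary search in the same spirit (not Google’s code): a population of random ‘programs’ built from basic math operations is scored on a simple task, and the best performers are kept and mutated each generation:</p>



<pre class="wp-block-code"><code># Toy sketch of evolutionary search in the spirit of AutoML Zero (not Google's code).
# Each "algorithm" is a tiny list of basic math ops; evolution keeps the candidates
# that score best on the task and mutates them.
import random

OPS = ["add", "sub", "mul"]

def random_program(length=4):
    return [random.choice(OPS) for _ in range(length)]

def run(program, x):
    """Interpret the op list as a chain applied to a running value."""
    acc = x
    for op in program:
        if op == "add":
            acc = acc + x
        elif op == "sub":
            acc = acc - 1.0
        else:
            acc = acc * 0.5
    return acc

def fitness(program):
    """Negative squared error against a hypothetical target function f(x) = 2x."""
    xs = [0.5, 1.0, 2.0, 3.0]
    return -sum((run(program, x) - 2.0 * x) ** 2 for x in xs)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

population = [random_program() for _ in range(100)]   # start from 100 random programs
for generation in range(50):
    population.sort(key=fitness, reverse=True)        # best candidates first
    survivors = population[:20]                       # keep the top 20%
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

best = max(population, key=fitness)
print("best program:", best, "fitness:", round(fitness(best), 3))
</code></pre>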



<p>Perhaps the most interesting byproduct of Google’s quest to completely automate the act of generating algorithms and neural networks is the removal of human bias from our AI systems. Without us there to determine what the best starting point for development is, the machines are free to find things we’d never think of.</p>



<p>According to the researchers, AutoML Zero already outperforms its predecessor and similar state-of-the-art machine-learning-generation tools. Future research will involve setting a narrower scope for the AI and seeing how well it performs in more specific situations, using a hybrid approach that creates algorithms from a combination of ‘Zero’s’ self-discovery techniques and human-curated starter libraries.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Google’s AutoML Zero lets the machines create algorithms to avoid human bias</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Critical thinking: How human intelligence can prevent bias in artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 14 Oct 2019 07:52:28 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[4IR]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[robotic]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4620</guid>

					<description><![CDATA[<p>Source: dailymaverick.co.za South Africa’s overall investment in artificial intelligence (AI) over the last decade is significant, with around $1.6-billion invested to date. These investments have seen businesses <a class="read-more-link" href="https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/">Critical thinking: How human intelligence can prevent bias in artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dailymaverick.co.za</p>



<p>South Africa’s overall investment in artificial intelligence (AI) over the last decade is significant, with around $1.6-billion invested to date. These investments have seen businesses experimenting with a range of different technologies, including chatbots, robotic process automation and advanced analytics.</p>



<p>This is a welcome reminder that fears about AI, automation and the impact of the Fourth Industrial Revolution (4IR) on the job market are sometimes overstated and alarmist. Microsoft, for example, is investing in a pair of data centres in South Africa that will create 100,000 jobs, imbue local workers with new, contemporary skills, and provide essential infrastructure for facing the economic challenges and opportunities to come.</p>



<p>But there are dangers.</p>



<p>With scrutiny and hindsight, the root of a failed AI project is often a gap between what was expected and what actually transpired or was realised. This gap between expectation and reality comes from biases, and these biases take numerous forms. Biases can emerge at various stages of a project – they may appear to be absent at the outset, but manifest as solutions age or are applied to different projects than those for which they were originally intended.</p>



<p>One of the most common biases is business-strategy bias, the belief that AI will address a problem it is ill-suited to. Closely related is problem-statement bias, a misunderstanding of the sorts of use cases AI can address.</p>



<p>In both of these instances, missteps upfront have a cascading effect. Employ AI unnecessarily or misdirect its efforts and other aspects of the business invariably suffer. But it’s also possible that turning to AI could have unexpected consequences, even if it’s arguably the right tool for the job. Look at Google, which has found itself on the receiving end of internal dissent and protest as a result of the AI-powered Project Maven, an initiative to harness AI for better-targeted drone strikes.</p>



<p>Or, more worryingly, consider the recent debacle Boeing has faced with its now-grounded 737 Max aircraft. The evidence is mounting that, rather than being a failure of AI, Boeing’s problems stem from a failure of leadership: the company’s executives rushed design and production and failed to institute appropriate checks and measures because they feared a competitor’s product would threaten the company’s revenue base.</p>



<p>With the power of hindsight, of course, many of AI’s failings, especially those attributable to one sort of bias or another, seem obvious. But as things stand, it’s very difficult to predict the next big failure. If decision-makers want to anticipate problems rather than only respond to them retroactively, they need to understand the risks of AI and the causes that underpin them.</p>



<p><strong>Data-based biases</strong></p>



<p>Even with an applicable instance of AI use, other potential pitfalls exist. AI models are based on two things: sets of data, and how those sets are processed. Incomplete or inaccurate data sets mean the raw materials upon which the algorithm depends will inevitably produce inaccurate, false or otherwise problematic outcomes. This is called information bias.</p>



<p>Consider, for example, the Russian interference in the 2016 US presidential election. Facebook’s business model relies on its ability to provide hyper-targeted advertising to advertisers. The algorithms the social network uses to profile users and sell space to advertisers are constantly adjusted, but Facebook didn’t deem it necessary to look for links between Russian ad buyers, political advertisements, and politically undecided voters in swing states.</p>



<p>Similarly, training chatbots on datasets like Wikipedia or Google News content is unlikely to produce the sort of quality results one would expect if the source material was gleaned from actual human conversations. The way news or encyclopaedias are structured simply doesn’t align with or account for the nuances and peculiarities of contemporary, text-based human conversation.</p>



<p>Closely linked to information bias is procedural bias and its sub-biases – how an algorithm decides what data matters, and how that data is chosen. If sampling or other data selection or abbreviation takes place, further room for error is introduced.</p>



<p>When analysing data, it’s essential the correct methodology is used. Applying the wrong algorithm necessarily leads to incorrect outcomes, and the decision as to which algorithm to apply can itself be the result of personal preferences or other biases in the more conventional sense.</p>



<p>It’s also essential to ensure that datasets, and the way they’re handled, don’t put companies in other potentially compromising positions. For example, global companies need to take care they don’t fall foul of new privacy regulations, such as the European Union’s General Data Protection Regulation (GDPR) or California’s Consumer Privacy Act (CCPA). Companies need to ensure that datasets that are meant to consist exclusively of anonymised data, for instance, don’t inadvertently include personally identifiable records.</p>



<p>A variant of methodology bias is cherry-picking, where a person selects the data or algorithms – or the combination of them – most likely to produce outcomes that align with their preconceptions. That is, the dataset is chosen precisely because it’s the one most likely to support a pre-decided or preferred outcome. Accurate and useful outcomes depend on neutrality, and neutrality depends on people.</p>



<p>“We used to talk about garbage in, garbage out,” says Wendy Hall, a professor of computer science at Southampton University and the author of a review into artificial intelligence commissioned by the UK government. “Now, with AI, we talk about bias in, bias out.” Hall says how AI is designed is at least as crucial for curtailing bias as monitoring the inputs upon which it depends.</p>



<p><strong>Diversity and bias</strong></p>



<p>The technology sector has an undeniable and well-documented problem with representation and diversity. Women and minorities are starkly underrepresented, and the resulting homogeneity of industry participants means not only that the biases that come from a lack of diversity are destined to find their way into AI, but that spotting them after the fact – before they do any harm – is necessarily going to be difficult.</p>



<p>In a rapidly shifting cultural and legal landscape, diversity bias also risks causing companies reputational damage or leaving them open to costly litigation. Reputational damage might be more difficult to quantify than legal challenges, but its ramifications can reach far further, tarnishing a company for years to come.</p>



<p>Examples of where a lack of diversity has proven to be problematic for AI-based services include instances where facial recognition tools have been shown to be grossly inaccurate when applied to people with dark skin, crime detection services erroneously flagging minorities more than other subjects, and self-educating chatbot conversations with end-users rapidly descending into hate speech.</p>



<p>Broadly speaking, customers tend to be heterogeneous. Thus, AI systems built to address their needs must be similarly heterogeneous both in their design and in their responses.</p>



<p>How does one avoid introducing bias into AI due to a lack of diversity? Quite simply, by ensuring the teams behind the AI are diverse. It’s also key that diversity not only encompasses varied ethnicity but accounts for differences in background, class, lived experiences and other variances.</p>



<p>It’s also important to ensure there’s diversity all the way down the value chain because doing so means any biases that escape detection earlier on are vastly more likely to be picked up before the results or outcomes damage the business, whether internally or externally.</p>



<p>Greater representation internally also tends to foster trust externally. In a data-driven economy, having customers feel represented and trust a business means they’re more likely to be willing to provide it with personal data because they’re more likely to believe the products created from doing so will benefit them. As data is the fuel that powers AI, this, in turn, leads to better AI systems.</p>



<p>Varied and inclusive representation also guards against cultural bias, which is a bias whereby one’s own cultural mores inform one’s decision-making. Consider, for instance, how some cultures’ scripts are read from right to left – rather than the reverse, as is standard in the West – the differences between Spanish-speaking South Americans and Continental Spanish speakers, or the enormous differences in dietary preferences that can exist between an immigrant neighbourhood and its neighbours in the very same city.</p>



<p><strong>Oversight and implementation</strong></p>



<p>Another link in the chain where bias can creep in is oversight. If a committee or other overarching body in an organisation signs off on something it shouldn’t – whether due to ignorance, ineptitude or external pressures – an AI-motivated path may be undertaken that ought not to be. Alternatively, incorrect methodologies may be endorsed, or potentially harmful outcomes may be tacitly or explicitly endorsed.</p>



<p>Meanwhile, once approval has been given, actual implementation may reveal that the real-world implications haven’t been considered. It’s here, too, that ethical considerations come to the fore. The output from an algorithm may suggest an action that doesn’t correspond with an organisation’s ethical positions or the culture in which it operates.</p>



<p>By way of example, an algorithm created five years ago – and devoid of bias when it was coded – may no longer be applicable or appropriate for today’s users, and using it will create biased outputs. It’s here that human intervention can be essential, and where transparency can be beneficial, because embracing it can broaden the audience that might be both able, and willing, to flag any problems.</p>



<p>The nuance of human intelligence is invaluable when it comes to shaping AI, and a reminder that it’s that intelligence that’s ultimately liable for anything generated by, or resulting from, placing faith in an artificial version of it.</p>



<p>The biases outlined above exist in people long before they do in machines, so while it’s imperative to weed them out of machines, it’s also an excellent opportunity for reflection. If we examine our own human weaknesses and failings and try to address them, we’re vastly less likely to introduce them into the AI systems we create, whether now or in the future.</p>
<p>The post <a href="https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/">Critical thinking: How human intelligence can prevent bias in artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/critical-thinking-how-human-intelligence-can-prevent-bias-in-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
