<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>systems Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/systems/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/systems/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 03 Jul 2021 10:14:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>TTCI R&#038;D: Machine Learning for Machine Vision Systems</title>
		<link>https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/</link>
					<comments>https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Jul 2021 10:14:24 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Machine Vision]]></category>
		<category><![CDATA[systems]]></category>
		<category><![CDATA[TTCI]]></category>
		<category><![CDATA[Vision]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14749</guid>

					<description><![CDATA[<p>Source &#8211; https://www.railwayage.com/ RAILWAY AGE, JULY 2021 ISSUE: Reliable, real-time monitoring of in-service railcar components will enhance the potential for maintenance planning. BY ANISH POUDEL, PH.D – PRINCIPAL INVESTIGATOR I (NDT); ABE MEDDAH – PRINCIPAL INVESTIGATOR I; AND MATT WITTE, PH.D – SCIENTIST, TRANSPORTATION TECHNOLOGY CENTER, INC.&#160; Through the Association of American Railroads (AAR) Strategic <a class="read-more-link" href="https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/">TTCI R&#038;D: Machine Learning for Machine Vision Systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.railwayage.com/</p>



<p><strong>RAILWAY AGE, JULY 2021 ISSUE: Reliable, real-time monitoring of in-service railcar components will enhance the potential for maintenance planning.</strong></p>



<p><strong><em>BY ANISH POUDEL, PH.D – PRINCIPAL INVESTIGATOR I (NDT); ABE MEDDAH – PRINCIPAL INVESTIGATOR I; AND MATT WITTE, PH.D – SCIENTIST, TRANSPORTATION TECHNOLOGY CENTER, INC.&nbsp;</em></strong></p>



<p>Through the Association of American Railroads (AAR) Strategic Research Initiatives (SRI) program, Transportation Technology Center, Inc. (TTCI) has been assisting suppliers and other stakeholders in the development of machine vision technologies and related algorithms for evaluating railcar components and conditions. To enhance safety and reduce worker exposure to yard risk, North American railroads have begun to install machine vision inspection systems in revenue service. </p>



<p>Using commercially available deep learning system platforms, TTCI researchers developed and demonstrated three convolutional neural network-based applications for analyzing visual images. A convolutional neural network is a type of artificial neural network that uses machine learning algorithms to analyze digital images. Convolutional neural networks are more powerful and effective than traditional artificial neural networks at recognizing, interpreting, and categorizing large, unstructured data sets, particularly those composed of visual imagery. The convolutional neural networks that TTCI demonstrated were used to identify and evaluate components on the railcar truck. Specifically, convolutional neural networks were developed to identify truck type, detect the location of a spring group, and measure the compression of spring coils to determine load conditions.</p>
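

<p>As a rough illustration only (not TTCI’s code), a minimal convolutional classifier of the kind described might look like the PyTorch sketch below; the class name, layer sizes, and three-way truck-type output are assumptions made for the example.</p>



<pre><code># A minimal convolutional image classifier (PyTorch). The class name, layer
# sizes, and three-way truck-type output are illustrative assumptions.
import torch
import torch.nn as nn

class TruckTypeCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions, any input size
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TruckTypeCNN()
dummy = torch.randn(1, 1, 512, 512)  # one grayscale image (downsampled for speed)
print(model(dummy).shape)            # torch.Size([1, 3]): one score per truck type
</code></pre>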



<p>A convolutional neural network must be trained to recognize and categorize data. This training is accomplished “by example” and requires two components: 1) A sampling of images representative of the type expected to be processed by the convolutional neural network, and 2) A specified desired output for when the network recognizes—or does not recognize—certain objects in each of these images. Grayscale JPEG images with 2048 × 2048 resolution were deemed adequate for developing and testing the applications. These images were obtained from TTCI’s manual viewer database system, which contains machine vision data from vision systems on site at the Federal Railroad Administration Transportation Technology Center (TTC), as well as images from a revenue service location <em>(Stewart, Monique, Matthew Witte, and Abe Meddah. August 2020. “In-Service Performance of a Truck Component Inspection System,” Federal Railroad Administration, Office of Research and Development, DOT/FRA/ORD-20/31, Washington, D.C.).</em> A total of 1,276 images were used to build and test these applications and to evaluate the overall feasibility of machine learning technology.</p>
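

<p>The following hedged sketch shows what such “training by example” looks like in code: images paired with desired outputs, a held-out portion for testing, and a loss that penalizes wrong answers. The split, labels, and placeholder model are illustrative assumptions, not details from the TTCI study.</p>



<pre><code># Hypothetical "training by example" loop. The 80/20-style split, label set,
# and placeholder model are illustrative assumptions, not details of the study.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

images = torch.randn(1276, 1, 128, 128)  # stand-in for the 1,276 grayscale images
labels = torch.randint(0, 3, (1276,))    # stand-in desired outputs (e.g., truck type)
train_set, test_set = random_split(TensorDataset(images, labels), [1020, 256])

model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 3))  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # penalize outputs that differ from the label
        loss.backward()
        optimizer.step()
</code></pre>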



<p>The post <a href="https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/">TTCI R&#038;D: Machine Learning for Machine Vision Systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ttci-rd-machine-learning-for-machine-vision-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AREN’T ARTIFICIAL INTELLIGENCE SYSTEMS RACIST?</title>
		<link>https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/</link>
					<comments>https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 22 Mar 2021 06:14:35 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[application]]></category>
		<category><![CDATA[AREN’T]]></category>
		<category><![CDATA[RACIST]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13672</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ No wonder Artificial Intelligence is the future. We’ve seen its application in possibly every field now. The problem isn’t with the technology, it is with the bias that goes into it, says Timnit Gebru. She goes on to add that it is built in a manner that replicates the white work force that’s <a class="read-more-link" href="https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/">AREN’T ARTIFICIAL INTELLIGENCE SYSTEMS RACIST?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<p>No wonder Artificial Intelligence is the future; we’ve seen its application in possibly every field now. The problem isn’t with the technology, it is with the bias that goes into it, says Timnit Gebru. She goes on to add that it is built in a manner that replicates the mostly male, white workforce making it. From her first lecture in Spain, at what is by far the world’s most important conference on AI to date, she has seen a vast difference in the numbers of men and women, with men obviously dominant. She highlights two things that she believes have remained constant over time: one, how technologically advanced we are becoming with every passing day, and two, how biased the work culture is, and how companies fail to acknowledge it.</p>



<p>Later, Dr. Gebru co-founded an organization, Black in AI, a community of Black researchers working in artificial intelligence. She completed her Ph.D. and was then hired by Google. It was during this time that she told Bloomberg News how AI suffers from what she called a “sea of dudes” problem. This left everyone stunned. She talked about how she had worked with hundreds of men over a period of five years, while the number of women she worked with could be counted on one hand.</p>



<p>It doesn’t end there. A few years back, a New York researcher saw how biased AI was against Black people. One incident raised eyebrows: a Black researcher learned that an AI system couldn’t identify her face until she put on a white mask.</p>



<p>Amidst all this, Dr. Gebru was fired. She said that this was an aftermath of her criticism of Google’s minority hiring. When Dr. Mitchell defended her, Google removed her too, without comment. This sparked arguments among researchers and tech workers.</p>



<p>Things got worse when Google tried its hand at image recognition. An AI model was trained to categorize photos by what was pictured – for example, dogs, a birthday party, food, etc. But then one user saw a folder named “Gorillas”. On opening it, he found about 80 photos that he had taken with a friend during a concert. His friend was Black. The point is that such AI models are trained by engineers who choose the data.</p>



<p>Yet another case along the same lines is that of Deborah Raji, a Black woman from Ottawa. While working for a start-up, she came across a page filled with faces that the company used to train its facial recognition software. As she kept scrolling, she found that more than 80% of the images were of white people, and more than 70% of those were of men. She was working on a tool that would automatically identify and remove pornography from images people posted to social networks; the system was meant to learn the difference between the pornographic and the anodyne. This is where the problems crept in: the G-rated images were dominated by white people, but the pornography was not, and so the system was beginning to identify Black people as pornographic. This is why choosing the right data matters – and since the people who chose this data were mostly white men, they saw nothing wrong with it.</p>



<p>Before working for Google, Dr. Gebru joined hands with Joy Buolamwini, a computer scientist at MIT. Ms. Buolamwini, who is Black, also faced bias in her work. She has narrated, more than once, her experience of an AI system recognizing her face only when she wore a white mask.</p>



<p>In later years, Joy Buolamwini and Deborah Raji joined hands to test the facial recognition technology from Amazon, marketed under the name Amazon Rekognition. They found that Amazon’s technology, too, had difficulty identifying the gender of female and darker-skinned faces. Amazon later called for government regulation of facial recognition, yet the company did not step back from attacking the researchers, both in private emails and in public blog posts.</p>



<p>Later, Dr. Mitchell and Dr. Gebru published an open letter in which they rejected Amazon’s argument and called on the company to stop selling the technology to law enforcement.</p>



<p>Dr. Gebru and Dr. Mitchell struggled hard to bring change to the organizations they worked with, but it didn’t pay off.</p>



<p>Dr. Gebru then produced a research paper written with six other researchers, including Dr. Mitchell. The paper discusses a system built by Google that supports its search engine, and how it can show bias against women and people of colour.</p>
<p>The post <a href="https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/">AREN’T ARTIFICIAL INTELLIGENCE SYSTEMS RACIST?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/arent-artificial-intelligence-systems-racist/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</title>
		<link>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/</link>
					<comments>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Mar 2021 11:54:03 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[adversarial]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[dodge]]></category>
		<category><![CDATA[helps]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13345</guid>

					<description><![CDATA[<p>Source &#8211; https://techxplore.com/ In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action—steer right, <a class="read-more-link" href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://techxplore.com/</p>



<p>In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward.</p>



<p>Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action—steer right, steer left, or continue straight—to avoid hitting a pedestrian that its cameras see in the road.</p>



<p>But what if there&#8217;s a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called &#8216;adversarial inputs,&#8217; it might take unnecessary and potentially dangerous action.</p>



<p>A new deep-learning algorithm developed by MIT researchers is designed to help machines navigate in the real, imperfect world, by building a healthy &#8216;skepticism&#8217; of the measurements and inputs they receive.</p>



<p>The team combined a reinforcement-learning algorithm with a deep neural network, both used separately to train computers in playing games such as Go and chess, to build an approach they call CARRL, for Certified Adversarial Robustness for Deep Reinforcement Learning.</p>



<p>The researchers tested the approach in several scenarios, including a simulated collision-avoidance test and the video game Pong, and found that CARRL performed better—avoiding collisions and winning more Pong games—than standard machine-learning techniques, even in the face of uncertain, adversarial inputs.</p>



<p>&#8220;You often think of an adversary being someone who&#8217;s hacking your computer, but it could also just be that your sensors are not great, or your measurements aren&#8217;t perfect, which is often the case,&#8221; says Michael Everett, a postdoc in MIT&#8217;s Department of Aeronautics and Astronautics (AeroAstro). &#8220;Our approach helps to account for that imperfection and make a safe decision. In any safety-critical domain, this is an important approach to be thinking about.&#8221;</p>



<p>Everett is the lead author of a study outlining the new approach, which appears in IEEE&#8217;s <em>Transactions on Neural Networks and Learning Systems</em>. The study originated from MIT Ph.D. student Björn Lütjens&#8217; master&#8217;s thesis and was advised by MIT AeroAstro Professor Jonathan How.</p>



<p><strong>Possible realities</strong></p>



<p>To make AI systems robust against adversarial inputs, researchers have tried implementing defenses for supervised learning. Traditionally, a neural network is trained to associate specific labels or actions with given inputs. For instance, a neural network that is fed thousands of images labeled as cats, along with images labeled as houses and hot dogs, should correctly label a new image as a cat.</p>



<p>In robust AI systems, the same supervised-learning techniques could be tested with many slightly altered versions of the image. If the network lands on the same label—cat—for every image, there&#8217;s a good chance that, altered or not, the image is indeed of a cat, and the network is robust to any adversarial influence.</p>
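

<p>A minimal sketch of such a consistency check is shown below, assuming an arbitrary PyTorch image classifier; the perturbation budget and the number of trials are invented for illustration.</p>



<pre><code># Hypothetical consistency check: does a classifier keep the same label under
# many small, random alterations of one image? "model" is any image classifier.
import torch

def is_locally_robust(model, image, n_trials=100, eps=2.0 / 255):
    base = model(image.unsqueeze(0)).argmax(dim=1)
    for _ in range(n_trials):
        noise = torch.empty_like(image).uniform_(-eps, eps)  # slight pixel shift
        altered = (image + noise).clamp(0.0, 1.0)
        if model(altered.unsqueeze(0)).argmax(dim=1) != base:
            return False  # an altered copy received a different label
    return True           # the label (e.g., "cat") survived every sampled alteration
</code></pre>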



<p>But running through every possible image alteration is computationally exhaustive and difficult to apply successfully to time-sensitive tasks such as collision avoidance. Furthermore, existing methods also don&#8217;t identify what label to use, or what action to take, if the network is less robust and labels some altered cat images as a house or a hotdog.</p>



<p>&#8220;In order to use neural networks in safety-critical scenarios, we had to find out how to take real-time decisions based on worst-case assumptions on these possible realities,&#8221; Lütjens says.</p>



<p><strong>The best reward</strong></p>



<p>The team instead looked to build on reinforcement learning, another form of machine learning that does not require associating labeled inputs with outputs, but rather aims to reinforce certain actions in response to certain inputs, based on a resulting reward. This approach is typically used to train computers to play and win games such as chess and Go.</p>



<p>Reinforcement learning has mostly been applied to situations where inputs are assumed to be true. Everett and his colleagues say they are the first to bring &#8220;certifiable robustness&#8221; to uncertain, adversarial inputs in reinforcement learning.</p>



<p>Their approach, CARRL, uses an existing deep-reinforcement-learning algorithm to train a deep Q-network, or DQN—a neural network with multiple layers that ultimately associates an input with a Q value, or level of reward.</p>



<p>The approach takes an input, such as an image with a single dot, and considers an adversarial influence, or a region around the dot where it actually might be instead. Every possible position of the dot within this region is fed through a DQN to find the action with the best worst-case reward, based on a technique developed by recent MIT graduate student Tsui-Wei &#8220;Lily&#8221; Weng Ph.D. &#8217;20.</p>
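

<p>The sketch below captures the spirit of that worst-case selection, though it substitutes random sampling over the uncertainty region for the certified bounds the researchers actually compute; the network interface, epsilon, and sample count are assumptions.</p>



<pre><code># Worst-case action selection in the spirit of CARRL. The published method uses
# certified lower bounds on Q; this sketch approximates the worst case by
# sampling observations from the uncertainty region instead.
import torch

def robust_action(q_network, obs, eps=0.05, n_samples=64):
    # Candidate observations: sampled "possible realities" around obs (1-D tensor).
    candidates = obs + torch.empty(n_samples, *obs.shape).uniform_(-eps, eps)
    q_values = q_network(candidates)           # shape: (n_samples, n_actions)
    worst_case_q = q_values.min(dim=0).values  # per-action worst case over the region
    return int(worst_case_q.argmax())          # action with the best worst-case reward
</code></pre>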



<p><strong>An adversarial world</strong></p>



<p>In tests with the video game Pong, in which two players operate paddles on either side of a screen to pass a ball back and forth, the researchers introduced an &#8220;adversary&#8221; that pulled the ball slightly further down than it actually was. They found that CARRL won more games than standard techniques, as the adversary&#8217;s influence grew.</p>



<p>&#8220;If we know that a measurement shouldn&#8217;t be trusted exactly, and the ball could be anywhere within a certain region, then our approach tells the computer that it should put the paddle in the middle of that region, to make sure we hit the ball even in the worst-case deviation,&#8221; Everett says.</p>



<p>The method was similarly robust in tests of collision avoidance, where the team simulated a blue and an orange agent attempting to switch positions without colliding. As the team perturbed the orange agent&#8217;s observation of the blue agent&#8217;s position, CARRL steered the orange agent around the other agent, taking a wider berth as the adversary grew stronger, and the blue agent&#8217;s position became more uncertain.</p>



<p>There did come a point when CARRL became too conservative, causing the orange agent to assume the other agent could be anywhere in its vicinity, and in response completely avoid its destination. This extreme conservatism is useful, Everett says, because researchers can then use it as a limit to tune the algorithm&#8217;s robustness. For instance, the algorithm might consider a smaller deviation, or region of uncertainty, that would still allow an agent to achieve a high reward and reach its destination.</p>



<p>In addition to overcoming imperfect sensors, Everett says CARRL may be a start to helping robots safely handle unpredictable interactions in the real world.</p>



<p>&#8220;People can be adversarial, like getting in front of a robot to block its sensors, or interacting with them, not necessarily with the best intentions,&#8221; Everett says. &#8220;How can a robot think of all the things people might try to do, and try to avoid them? What sort of adversarial models do we want to defend against? That&#8217;s something we&#8217;re thinking about how to do.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Designing and evaluating medical deep learning systems</title>
		<link>https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/</link>
					<comments>https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 06 Feb 2021 05:10:50 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Designing]]></category>
		<category><![CDATA[evaluating]]></category>
		<category><![CDATA[Medical]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12740</guid>

					<description><![CDATA[<p>Source &#8211; https://medicalxpress.com/ Can better design of deep learning studies lead to the faster transformation of medical practices? According to the authors of &#8220;Designing deep learning studies in cancer diagnostics,&#8221; published in Nature Reviews Cancer&#8217;s latest issue, the answer is yes. &#8220;We propose several protocol items that should be defined before evaluating the external cohort,&#8221; says <a class="read-more-link" href="https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/">Designing and evaluating medical deep learning systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://medicalxpress.com/</p>



<p>Can better design of deep learning studies lead to the faster transformation of medical practices? According to the authors of &#8220;Designing deep learning studies in cancer diagnostics,&#8221; published in <em>Nature Reviews Cancer</em>&#8217;s latest issue, the answer is yes.</p>



<p>&#8220;We propose several protocol items that should be defined before evaluating the external cohort,&#8221; says first author Andreas Kleppe at the Institute for Cancer Diagnostics and Informatics at Oslo University Hospital.</p>



<p>&#8220;In this way, the evaluation becomes rigorous and more reliable. Such evaluations would make it much clearer which systems are likely to work well in clinical practice, and these systems should be further assessed in phase III randomized clinical trials.&#8221;</p>



<p>Slow implementation is partly a natural consequence of the time needed to evaluate and adapt systems affecting patient treatment. However, many studies assessing well-functioning systems are at high risk of bias.</p>



<p>According to Kleppe, even among the seemingly best studies that evaluate external cohorts, few predefine the primary analysis. Adaptations of the deep learning system, the patient selection, or the analysis methodology can make the presented results over-optimistic.</p>



<p>The frequent lack of stringent evaluation on external data is of particular concern. Some systems are developed or evaluated on data that is too narrow or inappropriate for the intended medical setting. The lack of a well-established sequence of evaluation steps for converting promising prototypes into properly evaluated medical systems limits the medical utilization of deep learning systems.</p>



<p><strong>Millions of adjustable parameters</strong></p>



<p>Deep learning facilitates utilization of large data sets through direct learning of correlations between raw input data and target output, providing systems that may use intricate structures in high-dimensional input data to model the association with the target output accurately. Whereas supervised machine learning techniques traditionally utilized carefully selected representations of the input data to predict the target output, modern deep learning techniques use highly flexible artificial neural networks to correlate input data directly to the target outputs.</p>



<p>The relations learnt by such direct correlation will often be true but may sometimes be spurious phenomena exclusive to the data utilized for learning. The millions of adjustable parameters make deep neural networks capable of performing correctly in training sets even when the target outputs are randomly generated and, therefore, utterly meaningless.</p>
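

<p>This memorization effect is easy to reproduce. A small sketch, with invented sizes, trains an over-parameterized network to near-perfect accuracy on purely random labels:</p>



<pre><code># Reproducing the memorization effect with invented sizes: an over-parameterized
# network driven to high training accuracy on randomly generated labels.
import torch
import torch.nn as nn

x = torch.randn(256, 32)
y = torch.randint(0, 2, (256,))  # random, utterly meaningless target outputs
net = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 2))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(net(x), y)
    loss.backward()
    optimizer.step()

accuracy = (net(x).argmax(1) == y).float().mean().item()
print(f"training accuracy on random labels: {accuracy:.2f}")  # approaches 1.0
</code></pre>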



<p><strong>Design and evaluation challenges</strong></p>



<p>The high capacity of neural networks induces severe challenges for designing and developing deep learning systems and validating their performance in the intended medical setting. An adequate clinical performance will only be possible if the system has good generalizability to subjects not included in the training data.</p>



<p>The design challenges involve selecting appropriate training data, ensuring, for example, representativeness of the target population. They also include modeling questions, such as how the variation of the training data may be artificially increased without jeopardizing the relationship between input data and target outputs.</p>
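

<p>As one concrete illustration of artificial variation, the hedged sketch below uses standard torchvision transforms; the specific transforms and magnitudes are assumptions, not a recipe from the paper.</p>



<pre><code># Hypothetical augmentation pipeline (torchvision): the transforms and their
# magnitudes are assumptions. Small flips, rotations, and lighting changes vary
# the pixels while the diagnostic label they are paired with stays the same.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),   # small rotation: content unchanged
    transforms.ColorJitter(brightness=0.1),  # mild stain/lighting variation
    transforms.ToTensor(),
])
# Each training epoch then sees a differently varied copy of every image,
# increasing variation without breaking the input-output relationship.
</code></pre>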



<p>The validation challenge includes verifying that the system generalizes well. For example, does it perform satisfactorily when evaluated on relevant patient populations at new locations and when input data are obtained using differing laboratory procedures or alternative equipment? Moreover, deep learning systems are typically developed iteratively, with repeated testing and various selection processes that may bias results. Similar selection issues have been recognized as a general concern for the medical literature for many years.</p>



<p>Thus, when selecting design and validation processes for diagnostic deep learning systems, one should focus on the generalization challenges and prevent more classical pitfalls in data analysis.</p>



<p>&#8220;To achieve good performance for new patients, it is crucial to use varied training data. Natural variation is always essential, but so is introducing artificial variation. These types of variation complement each other and facilitate good generalizability,&#8221; says Kleppe.</p>
<p>The post <a href="https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/">Designing and evaluating medical deep learning systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/designing-and-evaluating-medical-deep-learning-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A.I. For Smarter Factories – The World of Industrial Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 05 Sep 2020 07:57:11 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Computer assistants]]></category>
		<category><![CDATA[systems]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11396</guid>

					<description><![CDATA[<p>Source: metrology.news. As the digital age moves forward, it’s becoming impossible to avoid interacting with artificial intelligence (AI) systems. Computer assistants and AIs perform an ever-growing range of tasks that are broadly intended to improve our quality of life. This extends to industry as well. But first, what do we mean by artificial intelligence? In <a class="read-more-link" href="https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/">A.I. For Smarter Factories – The World of Industrial Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: metrology.news.</p>



<p>As the digital age moves forward, it’s becoming impossible to avoid interacting with artificial intelligence (AI) systems. Computer assistants and AIs perform an ever-growing range of tasks that are broadly intended to improve our quality of life. This extends to industry as well.</p>



<p>But first, what do we mean by artificial intelligence? In simple terms, it’s any machine (usually a computer) that does things normally associated with human intelligence, such as reasoning, learning and self-improvement.</p>



<p>AI systems in industry are the same technologies you use in daily life but applied to industrial problems. The same kind of AI that makes our phone calls clearer can listen for bad blades in a sawmill. Programs built with AI, like those that help us find new movies and music suited to our unique preferences, can help guide designers in selecting the right materials to mix for the perfect concrete for the job. The same math behind teaching a toy dog to walk helps manufacturing facilities plan and schedule maintenance well into the future!</p>



<p>When these tools and algorithms target problems in physical (non-digital) industries, they fall into the special realm of industrial AI, or IAI. The many unique needs and challenges of industry set these algorithms apart from their more broadly used counterparts. Specific industries even have special names for the adoption of IAI technologies. For example, manufacturing engineers use terms such as Industry 4.0 and “smart manufacturing.” These all reflect the growing adoption and application of AI to problems previously thought impossible to automate.</p>



<p>So how can the same technologies be applied to such vastly different problems and still get good results? By understanding not only the tool, but also the problem faced and the environment where it will be used!</p>



<p>Generally, IAI is applied to tasks that are tedious, time-consuming or simply too difficult for humans to accomplish. The goal of IAI, like any tool, is making both worker and facility more productive. As part of this, ongoing efforts at the National Institute of Standards and Technology (NIST) aim to educate and guide users towards selecting the right IAI tool for the right job.</p>



<p>In broadest terms, IAI tools fall into two categories: predefined <em>rules-based</em> tools and <em>machine learning</em> tools. Some tools use combinations or hybrids of these two groups, such as reinforcement learning, but most IAI tools fit one of these descriptions.</p>



<p><strong>Following The Rules</strong></p>



<p>Rules-based AI operates strictly on predefined rules and requirements set during its creation. These AI tools are generally easier for humans to understand, both during creation and operation. These rely on equations or sets of “if-then”-type rules that tell the machine what to do. In their purest form, these AI tools tend not to change after creation. This makes them very stable and makes it easier to know why they did what they did during operations. This type of IAI is often so simplistic in its creation and execution that some people forget it is considered AI. However, the seemingly simple ideas and methods of rule-based IAI can build up to incredibly complex and sophisticated systems.</p>



<p>Rules-based IAI tools are ideal for well-understood processes or environments that allow a small set of possible outcomes. Simple decision-making processes or systems that can be sufficiently modeled with simple equations represent typical applications of these tools. A simple rules-based decision engine could measure and reject machined shafts that are too long or short with very basic “if-then” rules. Another example of rules-based IAI uses equations about the physical properties of spinning equipment to identify tiny cracks in bearings.</p>
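

<p>A minimal sketch of such a rules-based decision engine appears below; the nominal length and tolerance are invented for illustration.</p>



<pre><code># Minimal "if-then" shaft check; the nominal length and tolerance are invented.
NOMINAL_MM = 250.0
TOLERANCE_MM = 0.5

def inspect_shaft(measured_length_mm):
    if measured_length_mm > NOMINAL_MM + TOLERANCE_MM:
        return "reject: too long"
    if NOMINAL_MM - TOLERANCE_MM > measured_length_mm:
        return "reject: too short"
    return "accept"

print(inspect_shaft(250.2))  # accept
print(inspect_shaft(251.1))  # reject: too long
</code></pre>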



<p>This ease of understanding and stability comes with one unavoidable drawback: the designer must know and anticipate the system where the tool will operate. Because of this, the IAI will ultimately be limited by the knowledge and capabilities of the team who made it.</p>



<p><strong>Learning From Mistakes</strong></p>



<p>This brings us to the second major category of AI, machine learning algorithms. This is what most people think of when they hear the term AI.</p>



<p>Machine learning algorithms are the class of AI that learn and adapt from the inputs they receive from the environment. A user does not need to directly dictate the behavior of the program. Instead, the information it receives “teaches” the algorithm the correct output based on some reward scheme that helps it distinguish good responses from bad.</p>
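

<p>As a sketch of reward-driven learning, the classic tabular Q-learning update below shows how a reward, rather than a hand-written rule, shapes behavior; the sizes and the single experience tuple are illustrative.</p>



<pre><code># A classic tabular Q-learning update: the reward signal, not a hand-written
# rule, nudges the learned value of each response. All sizes are illustrative.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def update(state, action, reward, next_state):
    best_next = max(Q[next_state])  # value of the best follow-up action
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# One illustrative experience: a rewarded response is reinforced over time.
update(state=0, action=1, reward=1.0, next_state=2)
print(Q[0])  # the value of action 1 in state 0 has increased
</code></pre>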



<p>Machine learning often finds use in situations and problems with lots of data because it needs examples and trials to determine correct behavior. The more versatile machine learning tools do not always need to know the specific equipment or system they will be applied to during development. Many developers and end-users assume that the tool can learn to perform its job either in the field or from historic observations.</p>



<p>In industry, many equipment condition monitoring systems use machine learning to learn and recognize patterns of equipment behavior, then alert if this pattern changes. Many IAI tools are ultimately fancy “pattern learning” devices.</p>
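

<p>A minimal sketch of that pattern-learning idea: learn the normal range of a sensor reading, then alert when new readings leave it. The baseline values and threshold are invented.</p>



<pre><code># Learn the normal pattern of a sensor reading, then alert when the pattern
# changes. The baseline readings and 3-sigma threshold are invented.
import statistics

baseline = [0.52, 0.49, 0.51, 0.50, 0.53, 0.48]  # readings from healthy equipment
mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

def alert(reading, k=3.0):
    return abs(reading - mean) > k * std  # behavior left the learned pattern

print(alert(0.51))  # False: within the normal pattern
print(alert(0.90))  # True: pattern change, raise an alert
</code></pre>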



<p>Not every job can be solved by machine learning. Often machine learning tools are misapplied, misinterpreted or simply do not work within the limitations of the job. Good performance of any IAI tool requires certain conditions, especially for those built with machine learning. Data problems, job misspecification, lack of computing power and even operator error can all cause poor results.</p>



<p>Even though this may sound intimidating, remember that AI systems are made of mathematical models that perform many of the same operations we learned about in high-school math and science. The general principles that govern all good science and math still apply when dealing with AI. The AI tools should behave in repeatable, consistent ways that are independently verifiable.</p>



<p><strong>Testing Industrial AI</strong></p>



<p>NIST, along with external partners, is developing testing methods and metrics to help industry better pick out useful AI tools from bad ones. NIST is working with groups to further the science of metrology for IAI by perfecting how to test an AI in a way relevant to both the environment and the intended users.</p>



<p>When it comes to IAI, sometimes knowing what to measure is just as important as knowing how to measure it. For example, one ongoing effort helps companies measure the return on investment from using AI-based tools to evaluate production process performance based on product quality. This work looks at the risks and rewards of AI in terms of direct impact on safety and earnings. Measuring its value in relatable terms can help decision makers better understand the impact of the AI system before investing.</p>



<p>Good AI also needs good data. Data quality during training, testing and operations has an enormous impact on the performance of any AI system. Few qualified industrial datasets exist publicly. NIST provides open access factory simulators and workshop test bed data. But as new tools develop, the need for more data also grows.</p>



<p>Other work concerning data aims to help guide companies in properly collecting and curating their data. How you collect data has a strong effect on what can be done with that data. Many experiments directed by NIST are exploring the possibilities and limits of industrial data, including sources traditionally underused. A major effort at NIST examines the gathering and use of natural language from industrial documents such as maintenance logs or reports. Many documents with written words are hard to process with standard AI tools. Specialized IAI tools for processing natural language are also in development at NIST.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/">A.I. For Smarter Factories – The World of Industrial Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-i-for-smarter-factories-the-world-of-industrial-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</title>
		<link>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/</link>
					<comments>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 05 Sep 2020 07:18:59 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[DAS]]></category>
		<category><![CDATA[data & analytics]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[NERSC]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11386</guid>

					<description><![CDATA[<p>Source: phys.org The role of deep learning in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area for physics-informed deep learning that can more effectively identify nonlinear relationships in large datasets, extract patterns, emulate complex physical processes, and build predictive models. &#8220;Deep learning has had <a class="read-more-link" href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: phys.org</p>



<p>The role of deep learning in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area for physics-informed deep learning that can more effectively identify nonlinear relationships in large datasets, extract patterns, emulate complex physical processes, and build predictive models.</p>



<p>&#8220;Deep learning has had unprecedented success in some very challenging problems, but scientists want to understand exactly how these models work and why they do the things they do,&#8221; said Karthik Kashinath, a computer scientist and engineer in the Data &amp; Analytics Services Group (DAS) at the National Energy Research Scientific Computing Center (NERSC) who has been deeply involved in NERSC&#8217;s research and education efforts in this area. &#8220;A key goal of deep learning for science is how do you design and train a neural network so that it can capture accurately the complexity of the processes it seeks to model, emulate, or predict, and we&#8217;re developing ways to infuse physics and domain knowledge into these neural networks so that they obey the laws of nature and their results are explainable, robust, and trustworthy.&#8221;</p>



<p>We caught up with Kashinath following the Artificial Intelligence for Earth System Science (AI4ESS) Summer School, a week-long virtual event hosted in June by the National Center for Atmospheric Research (NCAR) and the University Corporation for Atmospheric Research (UCAR) that was attended by more than 2,400 researchers from around the world. Kashinath was involved in organizing and presenting at the event, along with David John Gagne and Rich Loft of NCAR. Much of Kashinath&#8217;s current research focuses on the application of deep learning methods to climate and Earth systems modeling.</p>



<p><strong>How are deep learning methodologies being adopted in weather, climate, and Earth systems research?</strong></p>



<p>In recent years we&#8217;ve seen a significant rise in the use of deep learning in science, not just in augmenting, enhancing or replacing existing methods, but also for discovering new science in physics, chemistry, biology, medicine, and more – discoveries that were nearly impossible with traditional statistical methods. We are now starting to see the same in the Earth sciences, with the number of publications in journals like <em>Geophysical Research Letters</em> and <em>Nature Geoscience</em> rising and scientific conferences now featuring entire tracks involving machine and deep learning.</p>



<p><strong>What does deep learning bring to the table?</strong></p>



<p>It is extremely powerful in pattern recognition and discovering very complex nonlinear relationships that exist in large datasets, both of which are critical for developing models of Earth science systems. The key goal of a weather or climate modeler is to understand the ways in which processes in nature operate and to model them in an effective manner so we can predict the future of climate change and extreme weather events. Deep learning offers new methods for using existing data to understand how these processes operate and to develop models for them that are not only accurate and effective but also computationally much faster than traditional methods. Traditionally, climate and weather models solve large systems of coupled nonlinear partial differential equations, which is extremely computationally intensive. Deep learning is starting to augment, enhance, or even replace parts of these models with very efficient and fast physical process emulators. And that&#8217;s a significant step forward.</p>



<p>Pattern recognition is another area where deep learning is influencing Earth systems research. The DAS group at NERSC has been pushing hard on pattern recognition for detecting and tracking weather and climate patterns in large datasets. The 2018 Gordon Bell prize for exascale climate analytics using deep learning testifies to our contributions in that area. Given that we already have petabytes of climate data and that it is increasing at a crazy rate, it is physically impossible to sift through and recognize the key features and patterns using traditional statistical approaches. Deep learning offers very fast ways to mine that data and extract useful information such as extreme weather patterns.</p>



<p>A third area is downscaling; that is, given a low-resolution dataset, how do you produce very high-resolution data that is necessary for things like planning, especially on regional and local scales? Part of the grand challenge of climate science is how to build very high-resolution models that are accurate and produce data that we can reliably work with. One way to attack the problem is to say okay, we know these models are extremely expensive, and in the foreseeable future – even with computing getting faster and better – we&#8217;re really not going to be able to build reliable global climate models at a spatial resolution of 1 km or finer. So if we can create a deep learning model that takes low-resolution climate data and produces high-resolution data that is physically meaningful, reliable, and accurate – that is a game changer.</p>
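

<p>One common deep-learning approach to downscaling is a super-resolution network; the sketch below, with assumed shapes and scale factor, maps a coarse field to a grid four times finer.</p>



<pre><code># A tiny super-resolution-style downscaler: map a coarse climate field to a
# grid four times finer. Shapes and the scale factor are illustrative.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a finer spatial grid
        )

    def forward(self, coarse):
        return self.net(coarse)

coarse_field = torch.randn(1, 1, 45, 90)  # e.g., one coarse global field
fine_field = Downscaler()(coarse_field)
print(fine_field.shape)                   # torch.Size([1, 1, 180, 360])
</code></pre>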



<p><strong>What is a grand challenge for deep learning applied to Earth system science?</strong></p>



<p>I come from a background in fluid dynamics, where modeling turbulence is a long-standing grand challenge. A similar challenge in the atmospheric sciences is modeling clouds. All climate models have parameterizations – components in the climate model that describe how various physical processes behave and interact with each other. In the atmosphere that includes how clouds form, how radiation works, when and where precipitation happens, etc. Cloud modeling is also known to be the largest source of uncertainty in climate model projections, and for decades one of the big challenges has been how to reduce the uncertainty. Models have become much more complex and capture many more physical phenomena, but they still have large uncertainties in their predictions. So one area where deep learning could have a significant impact is to help us build better emulators of atmospheric processes like clouds, with the goal of reducing the uncertainties in predictions. That is a very concrete scientific goal.</p>



<p><strong>As you look ahead, what are you most excited about in terms of the impact of deep learning on climate and Earth systems research?</strong></p>



<p>The major pushback we&#8217;ve had from the scientific community is that neural networks are black boxes that are hard to understand and interpret, and scientists obviously would like to understand exactly how these neural networks work and why they do the things they do. So one thing I&#8217;m really excited about is developing better ways to interpret and understand these networks and incorporate the knowledge that we have about the physics of the Earth system into these models so they are more robust, reliable, trustworthy, interpretable, explainable, and transparent. The goal is to convince ourselves that these models are behaving in ways that respect the physics of nature, are effectively using the domain knowledge that we have, and are making predictions that we can trust. I was invited to submit a paper to <em>Proceedings of the Royal Society</em> on exactly this topic, &#8220;Physics-informed Deep Learning for Weather and Climate Modeling,&#8221; which is now under review.</p>



<p>I&#8217;m also excited about proving, in operation, that these deep learning models provide the computational speedup we claim they will provide when we embed them into a large climate or weather model. For example, the European Weather Forecasting Center has started to replace some parts of its weather forecasting model with machine and deep learning models, and they are already starting to see benefits. In the U.S., NCAR and the National Oceanic and Atmospheric Administration are also starting to replace parts of their climate and weather models with machine learning and deep learning models, and a number of academic and industry-based research groups are working on related projects. Chris Bretherton, one of the world&#8217;s leading climate scientists, heads a group at the University of Washington that is working to replace some of the complicated cloud processes in these large climate models with deep learning methods. So I&#8217;m looking forward to seeing their results in a year or two on speedup and performance.</p>



<p><strong>What was the focus of the AI4ESS event, and why was it so well-attended?</strong></p>



<p>The Artificial Intelligence for Earth System Science (AI4ESS) Summer School focused on how attendees can strengthen their background in statistics and machine learning, learn the fundamentals of deep learning and neural networks, and learn how to use these for challenging problems in the Earth system sciences. We had an overwhelming response to the school – it was supposed to be an in-person event in Boulder, Colo., with a capacity of 80 students. But once it went virtual, we had 2,400 attendees from 40 countries across the globe. It was live-streamed through UCAR and they tracked the daily log-ins.</p>



<p>There was great participation throughout the week. We had invited speakers every day – three lectures a day, so 15 lectures over the week – with experts from machine learning, deep learning, and the Earth sciences. Each day there was also a panel discussion for 30 minutes over lunch, and for me, these were super exciting because all of these experts were discussing and debating about the challenges and opportunities of using machine learning and deep learning for Earth system science. The school also held a week-long hackathon, where teams of six each chose a project from six different problems to work on for the week. About 500 people participated in the hackathon, with a lot of collaboration and interaction, including individual Slack channels for each of the hackathon teams. There were also Slack channels for the entire week of the summer school on various things: lecture-related Q&amp;As, hackathon challenge problems, technical tips and tricks in machine learning and deep learning, etc. So there was a lot of Slack activity going on, with people exchanging ideas, sharing results, and so forth.</p>



<p><strong>Why is everyone so keen on learning this stuff?</strong></p>



<p>I think the community, especially the younger scientists, see that deep learning can be a game changer in science and they don&#8217;t want to be left behind. They believe that it is going to be mainstream soon and that it is going to be essential for doing science. That&#8217;s the main motivator. So AI4ESS focused on teaching the fundamentals and laying the groundwork for them to begin applying machine and deep learning successfully to their research.</p>
<p>The post <a href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why are Artificial Intelligence systems biased?</title>
		<link>https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/</link>
					<comments>https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 13 Jul 2020 06:45:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automated]]></category>
		<category><![CDATA[machine-learned]]></category>
		<category><![CDATA[systems]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10146</guid>

					<description><![CDATA[<p>Source: thehill.com A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an “investor” must be a male. A celebrated natural language generator called GPT, with <a class="read-more-link" href="https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/">Why are Artificial Intelligence systems biased?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thehill.com</p>



<p>A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an “investor” must be a male.</p>



<p>A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn’t seem to like female candidates.</p>



<p>Commercial gender-recognition systems put out by industry heavyweights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color. Another commercial face-recognition technology that Amazon tried to sell to government agencies has been shown to have significantly higher error rates for minorities. And a popular selfie lens by Snapchat appears to “whiten” people’s faces, apparently to make them more attractive.</p>



<p>These are not just academic curiosities. Broward County’s recidivism system, while supposedly only one of several factors judges were to consider, was shown to have a substantial impact. Just recently, we learned of the first false arrest of an African American based largely on a facial-recognition system.</p>



<p>Even as these embedded biases are discovered, new ones come up.</p>



<p>Perhaps the most egregious are what may be called “mugshot AI,” which claims to unearth useful patterns from physiognomic characteristics. From phrenology to palmistry, pseudosciences that claim to tell personality and mental states from physical characteristics are nothing new. AI’s newfound ability to process, recognize or find patterns from large-scale physiognomic data has, however, given a new lease on life to these dubious undertakings. Various companies claim to discern personality characteristics, including criminality, from mugshots, or to speed up recruitment by analyzing job candidates from online video interviews. Indeed, there is a tremendous temptation to look for some arbitrary correlational mapping from one high-dimensional object — a person’s face, voice, posture — to another critical decision variable, given enough data.</p>



<p>Of course, bias existed before the advent of AI systems. Human decision-makers, from law enforcement to employment agencies, have been known to act on internal biases. One saving grace is that there is variance in individual human biases, which works to reduce their macro harm; not all humans have the same difficulty in distinguishing between nonwhite faces, for example.</p>



<p>Yet, bias internalized in a widely deployed AI system can be insidious — precisely because a single set of biases becomes institutionalized with little variance.</p>



<p>The situation is further exacerbated by our well-known automation bias, which makes us subconsciously give greater weight to machine decisions.&nbsp;</p>



<p>Reining in inadvertent amplification of societal biases thus has become one of the most urgent tasks in managing the risks of data-driven AI technology.</p>



<p>So why do AI systems exhibit racist or sexist biases? Are people in commercial AI labs deliberately writing biased algorithms or training systems on deliberately biased data? It turns out that the offending behavior is most often learned rather than designed, and most of these systems have been trained on readily available public data, often gleaned from the web.</p>



<p>A critical catalyst for the recent successes of AI has been the automatically captured digital footprints of our lives and quotidian interactions. This allowed image-recognition systems to be trained on troves of pictures (often with labels) that we collectively upload onto the web, and natural language systems to be trained on the enormous body of language captured on the web — from Reddit to Wikipedia — through our daily interactions.</p>



<p>Indeed, the web and internet have become a repository of our Jungian collective subconscious — and a convenient way to train AI systems. A problem with the collective subconscious is that it is often raw, unwashed and rife with prejudices; an AI system trained on it, not surprisingly, winds up learning these and, when deployed at scale,&nbsp;can unwittingly exacerbate existing biases.</p>



<p>In other words, although it is no longer socially acceptable to admit to racist or sexist views, such views — and their consequences — often are still implicit in our collective behavior and captured in our digital footprints. Modern data-driven AI systems can unwittingly learn these biases, even if we didn’t quite intend them to.</p>



<p>AI systems trained on such biased data are used not only in predictive decision-making (policing, job interviews, etc.), but also to generate or complete under-specified data (e.g., to improve a low-resolution picture by upsampling it). This generation phase can itself be a vehicle for further propagation of biases. It shouldn’t come as a surprise, for example, that a system trained on images of engineering faculty members will more readily imagine a male face than a female one. The fact that machine-learning systems are limited to capturing correlational patterns in the data, and that some correlations may result from ingrained inequitable societal structures, means societal biases can seep in despite well-intentioned design.</p>



<p>Increasingly, designers are combating societal biases in AI systems. First and foremost is curating the training data. Unlike traditional disciplines such as statistics, which pay significant attention to data-collection strategies, progress in AI has come mostly from exploiting the copious data available on the web. Unfortunately, that readily available data often is asymmetric and doesn’t have sufficient diversity; Western societies, for example, tend to have a larger digital footprint than others. Such asymmetries, in turn, lead to the kinds of asymmetric failures observed in gender-detection systems. Some obvious ideas for curation, such as “blinding” the learning system to sensitive attributes such as gender and race, have been shown to be of limited effectiveness.</p>
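
<p>The limited effectiveness of “blinding” is easy to demonstrate. Below is a minimal sketch with synthetic data and hypothetical feature names: the sensitive attribute is withheld from the model, yet a correlated proxy feature lets the learned bias through.</p>

<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # sensitive attribute, withheld below
proxy = group + rng.normal(0, 0.3, n)     # e.g. a neighborhood-like feature
skill = rng.normal(0, 1, n)               # legitimate predictor
# Historical labels encode a bias in favor of group 0.
label = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([skill, proxy])       # the model never sees `group`
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
# The positive rates still differ sharply: the proxy reconstructs the bias.
</code></pre>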



<p>The other issue with using readily available data is that it is often rife with hidden biases. For example, as much as there is temptation to train large-scale language-generation systems on the profusion of text on the web, it is not surprising that user-generated text on forums that allow anonymous postings can be rife with prejudice. This explains to a large extent the types of biased text completions observed in some state-of-the-art language-generation systems. There is a growing understanding that training data must be carefully vetted. Such steps may increase the cost of training data and severely reduce the amount available. Nevertheless, given the insidious societal costs of using uncurated data, we must be ready to bear those costs.</p>



<p>Some also have advocated explicitly “de-biasing” the data (e.g., by balancing the classes in the training samples). While a tempting solution,&nbsp;such steps in essence correspond to a form of social engineering — in this case, of data. If there is social engineering to be done, it seems much better for society to do it at the societal level, rather than just by AI developers.</p>
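
<p>For completeness, here is what that “balancing” step usually looks like in practice. This is a hedged sketch only; the function and names are illustrative, and, as noted above, rebalancing is itself an editorial choice about the data.</p>

<pre><code>import numpy as np

def balance_by_oversampling(X, y, seed=0):
    """Oversample minority classes (with replacement) until all classes match."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=target, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

# Usage: X_bal, y_bal = balance_by_oversampling(X_train, y_train)
</code></pre>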



<p>Part of the challenge in controlling harmful societal biases in today’s AI&nbsp;systems is that most of them are largely data-driven, and typically do not take any explicit knowledge as input.&nbsp;Given that explicit knowledge is often the most natural way to state societal norms and mores, there are efforts to effectively infuse&nbsp;explicit knowledge into data-driven predictive systems.</p>



<p>Another proactive step is looking more carefully at what is being optimized by learning systems. Most systems focus on optimizing the accuracy of a predictive system. It is, however, possible for a system with high overall accuracy to still perform badly on certain minority classes. More generally, there is increasing recognition that the degree of egregiousness in misclassifications must be considered; after all, confusing apples with oranges is less egregious than confusing humans with animals. The prohibitive costs of false positives in some applications (e.g., face recognition in predictive policing) might caution a civilized society that, in some cases, predictive systems based on correlational patterns should be avoided, despite their seemingly high accuracy.</p>
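
<p>The accuracy-versus-minority-class point can be made concrete in a few lines. The numbers below are made up for illustration.</p>

<pre><code>import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0] * 950 + [1] * 50)            # class 1 is a small minority
y_pred = np.array([0] * 950 + [0] * 45 + [1] * 5)  # model almost always says 0

accuracy = (y_true == y_pred).mean()               # 0.955: looks excellent
per_class_recall = confusion_matrix(y_true, y_pred, normalize="true").diagonal()
print(accuracy, per_class_recall)                  # class-1 recall is only 0.10
</code></pre>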



<p>As AI technology matures and becomes widely deployed, there is increased awareness — in the research community, companies, governments and society — of the importance of considering its impacts. There are now premier academic conferences devoted to scholarly understanding of the impact of AI technology in exacerbating societal biases; increasingly, AI publications ask for an explicit discussion of the broader impacts of technical work. Alerted by ongoing research, companies such as IBM, Amazon and Microsoft are declaring moratoriums on the sale of technologies such as face recognition, pending greater understanding of their impacts. Several U.S. cities have banned or suspended facial-recognition technology in policing.</p>



<p>There is, of course, no magic bullet for&nbsp;removing societal bias from AI systems. The only way to make sure fair learning can happen from the digital&nbsp;traces of our lives is to&nbsp;actually lead fair lives, however tall an order that might be.</p>



<p>But we also should acknowledge that these systems, rightly used, can hold a mirror up to society.&nbsp;Just as television brought racial injustices into our living rooms during the 1960s’ civil rights movement and helped change us for the better, AI systems based on our digital footprints can help show us ourselves and, thus, be a force for our betterment.</p>



<p>Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He was president of the Association for the Advancement of Artificial Intelligence and helped start the Conference on AI, Ethics and Society. He was also a founding board member of Partnership on AI. </p>
<p>The post <a href="https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/">Why are Artificial Intelligence systems biased?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-are-artificial-intelligence-systems-biased/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How APIs helped NSW Health Pathology respond to COVID-19</title>
		<link>https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/</link>
					<comments>https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 29 Jun 2020 05:59:48 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[NSW]]></category>
		<category><![CDATA[Pathology]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9815</guid>

					<description><![CDATA[<p>Source: itnews.com.au NSW Health Pathology relied on its investment in API-led connectivity over the past four years to rapidly build out “world-class” public facing services in response to the coronavirus pandemic. Enterprise architect Tim Eckersley told the MuleSoft CONNECT digital summit last week the agency was able to move with speed in the early stages <a class="read-more-link" href="https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/">How APIs helped NSW Health Pathology respond to COVID-19</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itnews.com.au</p>



<p>NSW Health Pathology relied on its investment in API-led connectivity over the past four years to rapidly build out “world-class” public facing services in response to the coronavirus pandemic.</p>



<p>Enterprise architect Tim Eckersley told the MuleSoft CONNECT digital summit last week the agency was able to move with speed in the early stages of COVID-19 thanks to its “large library of healthcare microservices”.</p>



<p>The “library of API-led microservices” has been developed over the past four years to allow “seamless integration between a very broad range of healthcare systems”.</p>



<p>He said each “wave of delivery” had built up a “groundswell of microservices” which, although not always reusable straight away, meant “the reusable components gradually take a much more dominant posture and provide a really solid launching place to have this rapid response”.</p>



<p>“In terms of what we’ve been able to achieve with MuleSoft, we’ve used it to integrate our four laboratory information systems, which are our core systems of record in the background, with the greater health system,” Eckersley said.</p>



<p>“So that’s the eMRs [electronic medical records] or the eHRs [electronic health records], depending on if you&#8217;re in Australia or the United States, as well as the outpatients administration systems.”</p>



<p>“But then also tie those [systems] together with our federal systems, so things like the My Health Record and the national cancer screening registry.”</p>



<p>Eckersley, who heads up the agency’s DevOps group, said the architectural approach had allowed the agency, which is the largest public provider of pathology in Australia, to stand up, in as little as two weeks, a text bot that delivers COVID-19 test results to patients.</p>



<p>The automated citizen-facing service, which was developed in the first weeks of the pandemic in partnership with AWS, Deloitte and Microsoft, resulted in a drastic reduction in manual effort, the equivalent of returning “5000 days of effort back to clinical frontline staff”.</p>



<p>He said the “world-class service” &#8211; which returns a test result in less than 24 hours, several days faster than in other parts of the world &#8211; was initially piloted with several clinics, before being “rapidly rolled out across the state”.</p>



<p>“All [patients need to do when they go to get a nasal swab taken] is scan a QR code and it immediately pops open a text message of ‘what are my results?’ to our text bot service,” Eckersley said.</p>



<p>“And then that text bot requests that [the patient] put in identifying information, as well as the date their collection was taken, and it will instantly give them the results as soon as they become available.”</p>



<p>The bot integrates with a range of different healthcare systems, including “three Cerner instances” and Auslab, as well as a Jira service desk for which “we’ve been able to automate the ticket creation”.</p>



<p>“[This] allows us to pick up any of these edge cases which don&#8217;t automatically match and push out, and that allows us to keep that ceiling on patient notifications right down to within that three day window,” he said.</p>



<p>“And we&#8217;ve also been able to rapidly expand out the different audiences … to repackage that information out to different consumers, so that&#8217;s enabled us to leverage our developments to feed out the information to public health and our Service NSW partners.&nbsp;</p>



<p>“But also build our agent portal, which really provides a fantastic mechanism for our call centre agents to be able to have a great discussion with patients who haven’t been able to be contacted automatically and figure out what went wrong.”</p>



<p>However, Eckersley said that without the extensive work around building up a library of microservices over the prior four years, the agency would not have been in a position to “respond in as little as two weeks to get the initial service up and running.”</p>



<p>“The critical part of that is adapting [an] API-led pattern to the healthcare industry, and I think this is something that we’ve done relatively innovatively,” he said.</p>



<p>“[By] taking an HL7 message, using the MuleSoft HL7 adapters and then connecting it up with cloud infrastructure like Azure Service Bus for messaging, we’ve been able [to] make a state-scaled solution really quickly which can pick up the millions of messages that we get running through the state in any given week and handle them in an API-led way.</p>



<p>“So we take that message in HL7, we convert it to XML, and then we push it through our process API layer.</p>



<p>&#8220;Then at that point, it is converted into a range of different FHIR [Fast Healthcare Interoperability Resources].</p>



<p>&#8220;[We&#8217;re] able to, once in those FHIR resources, leverage things like Cosmos, which is a NoSQL database at hyperscale, to store that information and present a set of experience APIs to things like our web and mobile apps, as well as our partners.</p>



<p>&#8220;In our case we’ve integrated with Service NSW, and our text bot of course”.</p>
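
<p>As a rough illustration of the transformation pattern Eckersley describes (and only that; the production pipeline runs on MuleSoft with Azure Service Bus, and its mappings are far richer), here is a plain-Python sketch that turns a simplified HL7 v2 result message into a minimal FHIR Observation. All identifiers and values are invented, and the parser assumes one segment of each type.</p>

<pre><code>import json

hl7 = (
    "MSH|^~\\&amp;|LAB|NSWHP|EMR|HOSP|202006290800||ORU^R01|MSG0001|P|2.3\r"
    "PID|1||123456||CITIZEN^JANE\r"
    "OBX|1|ST|COVID19PCR^SARS-CoV-2 PCR||Not Detected|||N|||F"
)

def hl7_to_fhir_observation(message: str) -> dict:
    # Index segments by type (sketch assumption: one segment of each type).
    segments = {line.split("|")[0]: line.split("|") for line in message.split("\r")}
    pid, obx = segments["PID"], segments["OBX"]
    code, display = obx[3].split("^")          # OBX-3: observation identifier
    return {                                   # a minimal FHIR R4 Observation
        "resourceType": "Observation",
        "status": "final" if obx[11] == "F" else "preliminary",
        "code": {"coding": [{"code": code, "display": display}]},
        "subject": {"reference": f"Patient/{pid[3]}"},  # PID-3: patient ID
        "valueString": obx[5],                 # OBX-5: observation value
    }

print(json.dumps(hl7_to_fhir_observation(hl7), indent=2))
</code></pre>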



<p>Eckersley said the agency was now in the process of shifting all of its MuleSoft services to Kubernetes “piece-by-piece, rather than taking a big bang approach”, which will allow the agency to reduce risk and prioritise what applications it moves.</p>



<p>Chief information officer James Patterson, who also spoke at MuleSoft CONNECT digital, said reusing as many components as possible had allowed the agency to avoid creating “technical debt”.</p>



<p>“Even where we’ve had things like a billing project that’s using MuleSoft integration to bring data from our legacy systems into our more modern systems, we’ve been able to pick up components of that previous project and reuse them to build these new services,” he said.</p>



<p>“Where we&#8217;ve had legacy, we’ve had to build things from scratch in our modern integration environment, and obviously that takes longer and takes more effort.</p>



<p>“So we&#8217;re creating a situation where we’re removing technical debt as we go through the crisis, and I think that’s been really centred around our strategy with MuleSoft.</p>



<p>“Where we’ve had nice modular reusable services that we’ve built in our environment, we’ve been able to use them straight away, in hours or days versus weeks to build something from scratch.&#8221;</p>



<p>He also said that the upheaval had forced NSW Health Pathology to adopt agile practices; before the pandemic hit, the agency was using agile ways of working only 10 percent of the time.</p>



<p>“I think the opportunity is now there to introduce that way of working into all of our work or most of our work, which will really enhance the experience of our customers internally,” Patterson said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/">How APIs helped NSW Health Pathology respond to COVID-19</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-apis-helped-nsw-health-pathology-respond-to-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>No matter how sophisticated, artificial intelligence systems still need human oversight</title>
		<link>https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/</link>
					<comments>https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 May 2020 07:00:34 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[humans]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8843</guid>

					<description><![CDATA[<p>Source: zdnet.com Artificial intelligence and machine learning models can work spectacularly &#8212; until they don&#8217;t. Then they tend to fail spectacularly. That&#8217;s the lesson drawn from the COVID-19 crisis, as reported in MIT Technology Review. Sudden, dramatic shifts in consumer and B2B buying behavior are, as author Will Douglas Heaven put it, &#8220;causing hiccups for the algorithms <a class="read-more-link" href="https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/">No matter how sophisticated, artificial intelligence systems still need human oversight</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: zdnet.com</p>



<p>Artificial intelligence and machine learning models can work spectacularly &#8212; until they don&#8217;t. Then they tend to fail spectacularly. That&#8217;s the lesson drawn from the COVID-19 crisis, as reported in MIT Technology Review. Sudden, dramatic shifts in consumer and B2B buying behavior are, as author Will Douglas Heaven put it, &#8220;causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.&#8221;</p>



<p>Machine-learning models &#8220;are designed to respond to changes,&#8221; he continues. &#8220;But most are also fragile; they perform badly when input data differs too much from the data they were trained on. It is a mistake to assume you can set up an AI system and walk away.&#8221;&nbsp;</p>
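
<p>One practical guardrail implied by that warning is to monitor whether live inputs still resemble the training data, and to alert a human when they do not. Below is a minimal sketch with an invented feature and an arbitrary threshold; a real deployment would track many features and tune the test.</p>

<pre><code>import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_orders = rng.normal(loc=100, scale=15, size=5_000)  # "normal" behavior
live_orders = rng.normal(loc=160, scale=40, size=500)        # behavior shifts

# Two-sample Kolmogorov-Smirnov test: do the live inputs match training?
stat, p_value = ks_2samp(training_orders, live_orders)
if p_value &lt; 0.01:
    print(f"Input drift detected (KS={stat:.2f}); route to human review.")
</code></pre>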



<p>It&#8217;s evident, then, that we may be a long way from completely self-managing systems, if they ever arrive. If this current situation tells us anything, it&#8217;s that human insights will always be an essential part of the AI and machine learning equation.</p>



<p>In recent months, I had been exploring the potential range of AI and machine learning with industry leaders, and what role humans need to play. Much of what I heard foreshadowed the COVID upheaval. &#8220;There is always the risk that the AI system makes bad assumptions, reducing performance or availability of the data,&#8221; says Jason Phippen, head of global product and solutions marketing at SUSE. &#8220;It is also possible that data derived from bad correlations and learning are used to make incorrect business or treatment decisions.  An even worse case would clearly be where the system is allowed to run free and it moves data to cold or cool storage that causes loss of life or limb.&#8221;   </p>



<p>AI and machine learning simply can&#8217;t be dropped into an existing infrastructure or set of processes. Chris Bergh, CEO of DataKitchen, cautions that existing systems need to be adapted and adjusted. &#8220;In traditional architecture, an AI and machine learning system consumes data environments to fulfill the data needs,&#8221; he says. &#8220;We need a slight change to that architecture by letting AI manage the data environment. This transition must be done smoothly in order to prevent catastrophic failures in the existing systems as well as to implement robust systems.&#8221;</p>



<p>AI and machine learning systems &#8220;being developed to manage data environments must be considered as mission-critical systems, and the development must be carried out very carefully,&#8221; Bergh continues. &#8220;Since data is the driving force of present-day business decisions, data environments will be the heart of the business. Therefore, even a slight failure in data management will incur a significant cost to the business by loss of operational time, other resources and user trust.&#8221;</p>



<p>Bergh also points to the &#8220;knowledge gaps of data professionals and AI and machine learning experts in the areas of AI and machine learning and data management, respectively.&#8221;&nbsp; &nbsp;</p>



<p>The bottom line is that skilled humans will always be key to managing the flow and assuring the quality and timeliness of data being fed into AI and machine learning systems. The mechanics of data management will be autonomous, but the context of the data needs human involvement. &#8220;We can look at examples like self-driving cars and data center energy optimization using DeepMind at Google and be fairly confident that there will eventually be some parallel opportunities in database management,&#8221; says Erik Brown, a senior director in the technology practice of West Monroe Partners, a business/technology advisory firm. &#8220;However, fully autonomous databases are likely a stretch in the near future; human involvement should become more strategic and focused in areas where humans are best equipped to spend their time.&#8221; </p>



<p>Fully autonomous data environments &#8220;will likely take many years to achieve,&#8221; agrees Jeremy Wortz, a senior architect in West Monroe&#8217;s technology practice. &#8220;Machine learning is far from solving complex wide problems. However, an approach that develops narrow and deep use cases will make a difference over time and will start the journey of a self-managing system. Most organizations can take this approach but will need to ensure they have a way to enumerate the narrow use cases, with the right tech and talent to realize these use cases.&#8221;</p>



<p>The more organizations depend on AI, the more humans will need to step up and oversee the data that is moving into these systems, as well as the insights that are being produced. Eighty percent or more of the effort in AI and machine learning &#8220;is often data sourcing, translation, validation and preparation for complex models,&#8221; says Brown. &#8220;As these models are informing more critical business use cases &#8212; fraud detection, patient lifecycle management &#8212; there will continue to be more demands on the stewards of that data.&#8221;</p>



<p>Few data environments outside of the Googles and Amazons of the world are truly ready, Brown says. &#8220;This is a huge opportunity for growth in most industries. The data is there, but collaborative, cross-functional organizational structures and flexible data pipelines aren&#8217;t ready to harness it effectively.&#8221;</p>



<p>One does not have to be a degreed data scientist to manage AI systems &#8212; what is needed is an interest in learning and leveraging new techniques. &#8220;AI-powered technology is fueling the citizen data scientist trend, which is a game-changer,&#8221; says Alan Porter, director of product marketing at Nuxeo. &#8220;In the past, these roles have required deep technical knowledge and coding skills. But with advances in technology &#8212; many of the tools and systems do the heavy technical lifting for you. It&#8217;s not as critical for people to fill these positions to have technical knowledge; instead, organizations are looking for people who are more analytical with specific business expertise.&#8221; </p>



<p>While people with technical and coding skills will still play a critical role within organizations, Porter continues, &#8220;a big piece of the puzzle is now having analysts with specific business knowledge so they can interpret the information being gathered and understand how it fits into the big picture. Analysts also have to be good at communicating their findings to stakeholders outside the analytics team in order to effect change.&#8221;&nbsp; &nbsp;</p>



<p>In his MIT piece, Heaven concludes that &#8220;with everything connected, the impact of a pandemic has been felt far and wide, touching mechanisms that in more typical times remain hidden. If we are looking for a silver lining, then now is a time to take stock of those newly exposed systems and ask how they might be designed better, made more resilient. If machines are to be trusted, we need to watch over them.&#8221; Indeed.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/">No matter how sophisticated, artificial intelligence systems still need human oversight</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/no-matter-how-sophisticated-artificial-intelligence-systems-still-need-human-oversight/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Blindly using data to make decisions doesn’t create ethical AI systems</title>
		<link>https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/</link>
					<comments>https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 12 May 2020 09:29:35 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8721</guid>

					<description><![CDATA[<p>Source: themanufacturer.com It’s no secret that decisions made by artificial intelligence (AI) systems will increasingly impact our lives both professionally and personally, bringing a multitude of benefits to the way we live our lives. However, big decisions often come with an ethical price tag. For example, take AI in the Human Resources (HR) field. Many <a class="read-more-link" href="https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/">Blindly using data to make decisions doesn’t create ethical AI systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: themanufacturer.com</p>



<p>It’s no secret that decisions made by artificial intelligence (AI) systems will increasingly impact our lives both professionally and personally, bringing a multitude of benefits to the way we live our lives. However, big decisions often come with an ethical price tag.</p>



<p>For example, take AI in the Human Resources (HR) field. Many manufacturing businesses are starting to use AI and machine learning tools to screen the hundreds if not thousands of CVs they receive when hiring new employees.</p>



<p>To efficiently manage these applications, companies need to save time and human effort while also finding qualified and desirable candidates to fill the role.</p>



<p>However, even the best-trained AI system will have its flaws. Not because it wants to, but because it has been trained to – by us, through the historical data we feed it.</p>



<p>For example, suppose a company advertises a vacancy for a shop floor assistant at one of its plants, and historical data suggests that the large majority of people who have held this role are male.</p>



<p>While developing its learning capabilities, the AI is likely to engage with or respond to only the male applicants, so female applicants have a higher chance of missing out on the position. While not involving a manufacturer, this scenario played out in the case against Amazon, whose AI-based tool used for part of its HR process was found to be discriminating against women.</p>



<p>As a general technology, AI can be used in many ways, with businesses deciding how and where. However, with so few examples of how it can go wrong (at least in the public domain), businesses are blindly feeding AI systems data with little to no regard for the ethical implications.</p>



<h3 class="wp-block-heading">Why ethical AI is so important</h3>



<p>Ethics are critical to automated decision-making processes where AI is used. Without some consideration for how decisions are naturally made by humans, there is no way we can expect our AI systems to behave ethically.</p>



<p>Take the Volkswagen emissions scandal. Back in 2015, thousands of diesel VWs were sold across the globe with software that could sense test scenarios and change the engine’s performance to show reduced emissions. Once back on the road, they would switch back to ‘normal’, emitting up to 40 times the legal limit of nitrogen oxides – far more than the tests would have shown.</p>



<p>In this case the test engineers were following orders, so the question of who was responsible might have been unclear. However, the judicial response was that the engineers could have raised the issue or left the organisation, so liability lay with them.</p>



<p>The same could apply to data scientists in another scenario. If they realise that elements of a decision-making process could cause bias and harm, they have the option, and the obligation, to flag the issue or depart.</p>



<p>A final example might be the recent Boeing 737 Max disaster, where decisions made by software were overriding the decisions made by qualified pilots, leading to two fatal crashes and the grounding of the company’s entire 737 Max fleet.</p>



<p>These fledgling software systems, if not trained properly, have the potential to severely damage the reputation of a company, especially while liability is still in discussion.</p>



<h3 class="wp-block-heading">How biases are introduced and who’s responsible</h3>



<p>Although humans are the main source of these biases, there can also be bias in the data itself, and if we aren’t careful, AI will accentuate both.</p>



<p>A lack of representation in industry is also increasingly being cited as a root cause of the problem in data. While the question of liability is still being widely debated, I believe it’s important for business leaders to take more responsibility for unintentionally infusing bias into an AI system.</p>



<p>As humans, we will always be prone to making mistakes, but unlike machines we have ‘human qualities’ such as consciousness and judgement that come into play to correct the mistakes made over time.</p>



<p>However, unless these machines are explicitly taught that what they are doing is ‘wrong’ or ‘unfair’, the error will continue.</p>



<p>In my view, and I’m sure in the view of many others, blindly allowing these AI systems to continue making mistakes is irresponsible. And, when things do go wrong, which they inevitably will, we need to ask ourselves who is liable. Is it the machine, the data scientist or the owner of the data?</p>



<p>The question is still being debated within industry, but as errors become more public and are investigated, we will start to learn and understand where responsibility lies.</p>



<h3 class="wp-block-heading">How can we remove these biases?</h3>



<p>To ensure decision-making is fair and equal for all, manufacturers need to get better at thoroughly investigating the decision-making process to ensure there’s no bias on the part of the human, who will often act unintentionally and unconsciously.</p>



<p>This reduces or eliminates the chance of human biases being internalised by the AI and of the resulting errors being propagated.</p>



<p>I’d like to see a benchmark set for businesses, either through a series of questions or a thorough checklist, to guarantee any bias on the part of the human is eradicated at the outset. The checklist would ensure all decision-making is fair and equal, accountable, safe, reliable, secure and addresses privacy aspects.</p>



<p>The checklist could be used for in-house data science teams, especially as an induction tool for new recruits, or for external companies that are outsourced by businesses to build and manage their AI systems.</p>



<p>If manufacturers do decide to outsource aspects of their machine learning capabilities, this checklist is especially pertinent as it acts as a form of contract, whereby any potential disputes over liability can more easily be resolved.</p>



<p>As we’re still in the early stages of AI, it’s unclear whether these measures would be legally binding, but they may go some way to proving – to an insurance company or lawyer – where liability lies.</p>



<p>If a manufacturer can demonstrate that a checklist has or hasn’t been followed, depending on whether the work has been kept in-house or otherwise, they are more protected than they might have been before.</p>



<p>Another part of benchmarking could be to ensure all data scientists within a business, whether new to the role or experienced technicians, take part in a course on ethics in AI.</p>



<p>This could also help people understand or remember the need to remove certain parameters from the decision-making process, for example any gender biases. This way, when building a new AI system which takes male and female activity into account, they’ll know to deactivate the gender feature to ensure the system is gender neutral.</p>
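
<p>Deactivating the feature is a start, but the outcomes still need auditing, which such a checklist could mandate. Below is one simple, hedged example of such an audit on synthetic data; the “four-fifths” threshold is a common disparate-impact heuristic, not a legal test.</p>

<pre><code>import numpy as np

# Synthetic screening outcomes: 1 = shortlisted, 0 = rejected.
selected = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
gender = np.array(["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F", "F"])

rates = {g: selected[gender == g].mean() for g in ("M", "F")}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")
if ratio &lt; 0.8:  # four-fifths heuristic
    print("Selection rates fail the four-fifths check; investigate the model.")
</code></pre>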



<p>This article isn’t designed to scare people; rather, it’s to urge business leaders to stop overlooking potential biases in the automated decision-making process and to ensure decisions are fair and equal for everyone.</p>



<p>There’s always a chance that human bias will creep in, but it’s down to us to take the necessary steps to ensure processes are fair and transparent. And the quicker and more efficiently we set up a benchmark from which to work, the more likely we are to build fairer and more ethical AI systems.</p>
<p>The post <a href="https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/">Blindly using data to make decisions doesn’t create ethical AI systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/blindly-using-data-to-make-decisions-doesnt-create-ethical-ai-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
