<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>machine-learning algorithm Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machine-learning-algorithm/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machine-learning-algorithm/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 24 Mar 2018 05:59:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Artificial Intelligence lifeline for India’s flailing healthcare</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 24 Mar 2018 05:59:34 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI applications]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[machine-learning algorithm]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2149</guid>

					<description><![CDATA[<p>Source &#8211; financialexpress.com India must take a leaf from China’s book on improving healthcare. China, which, in 2015, had 3.6 physicians for every 1,000 population, is deploying artificial <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/">Artificial Intelligence lifeline for India’s flailing healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; financialexpress.com</p>
<div class="main-story-content">
<p>India must take a leaf from China’s book on improving healthcare. China, which, in 2015, had 3.6 physicians for every 1,000 population, is deploying artificial intelligence (AI) in a big way to make up with automation for what it lacks in manpower in healthcare. An article in the MIT Technology Review (MTR) describes how AI is easing work in many areas of healthcare, from diagnostics to dentistry; in fact, the Chinese government has made computerised medical diagnosis one of the cornerstones of its grand plan to embrace AI by 2020. While over 130 companies are already working on AI applications in the country’s healthcare sector, the IDC estimate of a $930-million market in China for AI-led healthcare services by 2022 means more such companies are expected to come online. China’s homegrown tech giants, Alibaba and Tencent, have also placed significant bets on AI diagnostic tools.</p>
<p>The MTR report describes how cancer radiologists in more than 20 Chinese hospitals are using a neural network that helps them identify potential malignancies, drawing on thousands of reports, images and conclusions from medical professionals. While the network makes the doctors’ job easier, the doctors in turn help refine it by correcting any mistakes it throws up; the start-up that created the software has netted over 180 hospitals so far as research collaborators.</p>
<p>A research collaboration between a Beijing-based oncologist and scientists at Tsinghua University aims to develop a machine-learning algorithm that will detect blood clots, linked to lymphoma treatment, from ultrasound data. Early detection can avoid the complications from clots, but Chinese hospitals, quite like Indian ones, are often hard-pressed for resources, which makes it difficult to screen each patient. Thus, attention to blood clots becomes hostage to the onset of symptoms, which typically require an emergency response.</p>
<p>In India, the healthcare infrastructure is inadequate on many fronts. The country, in 2016, had just 0.75 physicians per 1,000 population. Public spending on health, per capita, in 2015 stood at a mere $16.2 compared with China’s $254.4 and the US’s whopping $4,810. This meant the government expenditure on health in India, as a proportion of the country’s overall health spending, was just 25% compared with China’s 60%. While improving the density of physicians, diagnostic facilities, hospitals and health centres, and other indicators of healthcare adequacy is an important goal, India could definitely benefit from taking a cue from China on deploying AI.</p>
<p>To be sure, it is not as if India has not moved on this at all. Private start-ups such as Bengaluru-based Niramai, which uses machine learning and big-data analytics to develop low-cost, accurate and pain-free breast cancer screening, are already filling some gaps. AI is even being used in hospital management: Max Healthcare in Delhi is using it to monitor the health of critical care patients; this has helped it free up ICU beds faster and is reported to be saving patients almost 30% of typical critical care costs.</p>
<p>However, giving healthcare a decisively AI focus will need more centralised action. China’s drug approval and regulatory regime has already incorporated many AI diagnostic tools into its lists of permitted medical devices and technologies, though there is some degree of price control. It also helps that China views AI and machine learning less through the lens of potential job losses and more through that of productivity gains, with skilled workers’ jobs made easier. India, on the other hand, is yet to even articulate a comprehensive vision on AI, let alone on AI in healthcare.</p>
</div>
<div class="common-bottom-text">
<p>&nbsp;</p>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/">Artificial Intelligence lifeline for India’s flailing healthcare</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-lifeline-for-indias-flailing-healthcare/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Why we should study about biases in artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 01 Nov 2017 06:42:44 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[machine-learning algorithm]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1608</guid>

					<description><![CDATA[<p>Source &#8211; yourstory.com There is a lack of research quantifying how biases in AI systems damage individuals and societies. You have just mailed your CV <a class="read-more-link" href="https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/">Why we should study about biases in artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> yourstory.com</strong></p>
<p><em>There is a lack of research quantifying how biases in AI systems damage individuals and societies.</em></p>
<p>You have just mailed your CV to apply for your dream job. You are anxiously waiting for a response from the company. This could be the opportunity of a lifetime. But wait, there is a little twist. Your fate hangs in the hands of a machine learning algorithm. The company has just decided to outsource preliminary filtering of CVs to an artificial intelligence (AI) company because there were too many applications to handle. I am sorry to tell you that a study showed an identical CV is 50 percent more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. What do you do?</p>
<p>Bias is an “inclination or prejudice for or against one person or group, especially in a way considered to be unfair.” We have heard Elon Musk call AI our “biggest existential threat,” although more and more AI experts have spoken up against such pessimism, saying that AI agents today are far from true intelligence.</p>
<p>“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most,” Musk said.</p>
<p>This debate will go on.</p>
<h2><strong>Most urgent matters of concern in AI</strong></h2>
<p>In the shadow of this debate, there is a lurking debate that gets ignored much of the time: how AI systems are biased, and how, knowingly and unknowingly, we have transmitted various forms of bias into the intelligent systems we build.</p>
<p>As years pass by, there will be an increased chance that some form of artificially intelligent agent enters more and more areas of your life. Your Facebook feed is an important piece of internet real estate where your social and political ideas are influenced. Imagine your newsfeed recommendation engine on Facebook (and on other platforms) being plagued by subtle biases that have crept into the system either through biased datasets or interactions.</p>
<p>Self-driving cars, loan prediction engines, healthcare systems and many other systems are plagued by biases. Machine learning systems learn through data and interactions. Data accumulated by engineers and data scientists through crowdsourcing, or even existing open-source datasets, contains all the forms of bias humans are prone to. Thus it is fair to say that machine learning algorithms themselves are not biased: humans are biased, and we are transmitting this bias into AI systems through the data we produce.</p>
<p>The really alarming thing to note is that very few people care or are making a real effort to repair these faults. Important stakeholders and the companies that develop these systems show little interest in searching for and eliminating biases. To put it in plain and clear terms: if your AI service provider doesn’t clearly explain how they have trained your system or which data has been used, you shouldn’t trust them, especially when the applications are in critical areas like medical support systems and credit approval. The first step is simply to recognise this as a problem.</p>
<p>A long-term strategy is needed to train data scientists to identify and correct for biases found in the data whenever they are building intelligent systems in critical application areas. As we keep replacing human thought processes with machine learning algorithms, we tend to rest assured. There is a tendency to trust machine algorithms more than humans, and this is obviously a worry.</p>
<p>In the short term, data scientists in industry should emulate the practices of social scientists, who long ago formed professional habits of questioning data sources and the methods used to gather and analyse data. Rigorous and detail-oriented data collection methods bring a certain amount of context to the problem being studied. The more nuanced the data used to train algorithms, the more we can expect the machine learning model to be free from bias.</p>
<h2><strong>Digital divide and flawed datasets</strong></h2>
<p>Many forms of bias come from a lack of equitable access and the digital divide. We increasingly rely on big data sources coming (mostly) from affluent societies and from people with access to digital devices and the internet. For example, the Twitter sentiment analyser you just built for the election has no representation from villages where people don’t have access to the internet. We simply assume that data accurately reflects the ground reality, but sadly this is far from the truth.</p>
<p>Consider a popular benchmark for facial recognition: Labeled Faces in the Wild (LFW), a collection of more than 13,000 face photos. About 83 percent of the photos are of white people, and nearly 78 percent are of men. This is a perfect example of a flawed benchmark and a biased dataset at the same time. If your startup, company or AI vendor is using this dataset to build a face detection system, the system is flawed and, more importantly, heavily biased.</p>
<p>There is also a lack of research quantifying how biases in AI systems damage individuals and societies. There are only anecdotes about how a voice recognition system failed to recognise someone’s voice, but there are no concrete studies showing how different, widely used AI systems discriminate against sections of society. Acknowledging that bias exists is itself a good start.</p>
<p>One of the fundamental questions is: how do you define fairness? What makes an algorithm fair? Different scientific studies use different notions of algorithmic fairness, and although each appears internally consistent, they are mutually incompatible. <strong>Statistical parity</strong> is one way to define fairness: it asks whether an algorithm produces positive outcomes for a “protected” subset of the population at the same rate as for everyone else.</p>
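<p>As an illustrative sketch (the data below is hypothetical and not drawn from any study cited here), statistical parity can be checked by comparing positive-decision rates across groups:</p>

```python
# Hypothetical sketch: checking statistical parity for a binary decision system.
# Statistical parity holds when the positive-decision rate for the protected
# group matches the rate for everyone else.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1 = invited, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(protected, unprotected):
    """Gap in positive-decision rates; values near 0 suggest parity."""
    return positive_rate(protected) - positive_rate(unprotected)

# Toy data: interview invitations (1) for two groups of CV applicants.
protected_group   = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% invited
unprotected_group = [1, 1, 0, 1, 1, 0, 1, 0]   # 62.5% invited

gap = statistical_parity_difference(protected_group, unprotected_group)
print(f"Statistical parity difference: {gap:.3f}")  # -0.375
```

<p>A large negative gap, as in this toy example, would indicate that the protected group receives positive decisions far less often than everyone else.</p>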
<h2><strong>Policy in AI</strong></h2>
<p>There can be biases that are socially acceptable and biases that are not, and extensive policy research is needed before we can conclude which are which. Policy intervention is also necessary because most practitioners demonstrate a limited understanding of algorithmic fairness, and management rarely goes into the details of the algorithms. Especially today, it is extremely difficult for anyone to keep up with the pace of development, as firms like Google and Facebook have radically narrowed the gap between academia and industry.</p>
<p>Policy studies in AI also become important because there exists no mechanism to audit algorithms for “disparate impact”. Disparate impact occurs when neutral-sounding rules disproportionately affect a legally protected group. It is often difficult to prove and study even when algorithms are not involved; in legal matters, disparate impact is only unlawful when there clearly exists an alternative way to carry out the procedure in question. Because of this, it will be difficult to prove that machine learning algorithms have a disparate impact. Meanwhile, deep learning algorithms are moving beyond our current ability to analyse them.</p>
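<p>One widely used heuristic for such audits is the American “four-fifths rule”: a selection procedure is flagged for possible disparate impact when the protected group’s selection rate falls below 80 percent of the reference group’s rate. A minimal sketch, with made-up numbers:</p>

```python
# Hypothetical audit sketch based on the "four-fifths" (80%) rule: a procedure
# is flagged when the protected group's selection rate is below 80% of the
# reference group's selection rate.

def selection_rate(selected, total):
    """Fraction of applicants who were selected."""
    return selected / total

def has_disparate_impact(protected_rate, reference_rate, threshold=0.8):
    """True if the ratio of selection rates falls below the threshold."""
    return (protected_rate / reference_rate) < threshold

# Made-up numbers: 12 of 100 protected applicants selected vs 20 of 100 others.
protected = selection_rate(selected=12, total=100)   # 0.12
reference = selection_rate(selected=20, total=100)   # 0.20

print(has_disparate_impact(protected, reference))  # True: 0.12/0.20 = 0.6 < 0.8
```

<p>A heuristic like this only flags a procedure for closer scrutiny; it does not, by itself, establish that the rule in question is unlawful.</p>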
<p>To sum it up, the present and future success of AI generally depends on its ability to:</p>
<ol>
<li>Understand bias and discrimination problems in AI systems</li>
<li>Detect biases in AI algorithms</li>
<li>Study and put in practice processes to build bias-free AI systems</li>
<li>Study and put in place mechanisms to eliminate biases without taking away the power of these algorithms.</li>
</ol>
<p>The post <a href="https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/">Why we should study about biases in artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-we-should-study-about-biases-in-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning mobile microscope measures air quality</title>
		<link>https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 11 Sep 2017 09:10:16 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[automatically analyses]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine-learning algorithm]]></category>
		<category><![CDATA[mobile microscope]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1049</guid>

					<description><![CDATA[<p>Source &#8211; laboratorytalk.com The device, called c-Air, is intended to give more people around the world the ability to accurately detect dangerous airborne particulate matter, the researchers <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/">Machine learning mobile microscope measures air quality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>laboratorytalk.com</strong></p>
<p>The device, called c-Air, is intended to give more people around the world the ability to accurately detect dangerous airborne particulate matter, the researchers said.</p>
<p>It works by detecting pollutants and determining their concentration and size using a mobile microscope connected to a smartphone and a machine-learning algorithm that automatically analyses the images of the pollutants.</p>
<p>According to the researchers, c-Air is just as accurate as current higher-end equipment, but could cost tens of thousands of dollars less. It comprises an air sampler and a holographic microscope about the size of a computer chip. It can screen 6.5 litres of air in 30 seconds and generates images of the airborne particles.</p>
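<p>For a rough sense of the arithmetic involved (this is a hypothetical sketch, not c-Air’s actual pipeline), once a machine-learning model has counted the particles in the sampled images, concentration follows from count over sampled volume:</p>

```python
# Hypothetical sketch: deriving particle concentration from an image-based
# particle count and the volume of air sampled. The particle count below is
# made up; only the 6.5-litre sample volume comes from the article.

def particle_concentration(particle_count, litres_sampled):
    """Particles per litre of sampled air."""
    return particle_count / litres_sampled

count = 1300                                   # hypothetical particles counted
conc = particle_concentration(count, 6.5)      # c-Air samples 6.5 L in 30 s
print(f"{conc:.0f} particles/litre")           # 200 particles/litre
```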
<p>UCLA professor Aydogan Ozcan, who led the research team, said: &#8220;With lab-quality devices in the hands of more people, high-quality data on pollutants as a function of time from many more locations can be collected and analysed.</p>
<p>&#8220;That can then help governments develop better policies and regulations to improve air quality.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/">Machine learning mobile microscope measures air quality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-mobile-microscope-measures-air-quality/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
