<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>algorithm learning Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/algorithm-learning/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/algorithm-learning/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 18 Nov 2019 05:35:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Will Machine Learning Algorithms Erase The Progress Of The Fair Housing Act?</title>
		<link>https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/</link>
					<comments>https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 Nov 2019 05:35:28 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm learning]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5224</guid>

					<description><![CDATA[<p>Source:- forbes.com This August, the Department of Housing and Urban Development put forth a proposed ruling that could potentially turn back the clock on the Fair Housing Act (FHA). <a class="read-more-link" href="https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/">Will Machine Learning Algorithms Erase The Progress Of The Fair Housing Act?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: forbes.com</p>



<p>This August, the Department of Housing and Urban Development put forth a proposed ruling that could potentially turn back the clock on the Fair Housing Act (FHA). This ruling states that landlords, lenders, and property sellers who use third-party machine learning algorithms to decide who gets approved for a loan or who can purchase or rent a property would not be held responsible for any discrimination resulting from these algorithms.</p>



<p><strong>The Fair Housing Act</strong></p>



<p>The Fair Housing Act (FHA) is part of the Civil Rights Act of 1968. It stated that people could not be discriminated against in the purchase of a home, the rental of a property, or the qualification for a lease on the basis of race, national origin, or religion. In 1974 the protections were expanded to include gender, and in 1988, disability. Some states and localities also ban discrimination based on sexual orientation or gender identity. Yet a 2018 report by the National Fair Housing Alliance, which analyzed 50 years of data, concluded there is still a long way to go.</p>



<h3 class="wp-block-heading">The Proposed Ruling</h3>



<p>The proposed ruling addresses the fact that many decisions about who gets approved for a loan or a lease, or who is allowed to purchase a property, now rely on machine learning algorithms. These algorithms allow near-instantaneous approval by sifting through enormous data sets to determine who is most likely to, say, pay back a loan. In today’s data-driven society, machine learning algorithms are everywhere, simplifying decisions in cases where no human could sort through the data by hand. They are used for everything from determining who qualifies for a credit card to choosing which ads you see on the internet and what Netflix suggests to you.</p>



<p>Some believe that, by handing decision-making over to software, any human discrimination (unconscious or otherwise) would be eliminated. But it would be a mistake to think that algorithms don’t suffer from biases of their own.</p>



<p>Drafters of the proposed ruling don’t deny that these algorithms can produce bias. The controversy arises over who should be held responsible when discrimination results. The proposal states that lenders and sellers who use the algorithms should not be held accountable for that bias.</p>



<h3 class="wp-block-heading">How Machine Learning Algorithms Work</h3>



<p>You can devise a simple algorithm on a piece of paper to approve or deny a loan. If, say, people in the top 60% of credit scores tend to reliably pay off their loans, you can sort applicants by credit score and approve those who fall within the top 60%.</p>
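<p>The paper-and-pencil rule above can be sketched in a few lines of Python. The applicant names and credit scores here are invented purely for illustration.</p>

```python
# Toy version of the rule described above: rank applicants by credit
# score and approve the top 60%. All names and scores are made up.

def approve_top_fraction(applicants, fraction=0.6):
    """Return the set of applicant names in the top `fraction` by credit score."""
    ranked = sorted(applicants.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = round(len(ranked) * fraction)
    return {name for name, score in ranked[:cutoff]}

applicants = {"Ana": 720, "Ben": 580, "Cho": 690, "Dev": 640, "Eve": 610}
approved = approve_top_fraction(applicants)
print(approved)  # the three highest-scoring applicants of five
```

<p>Note how transparent this rule is: one input, one threshold, and anyone can audit why a given applicant was denied. The machine learning systems the article goes on to describe lack exactly this property.</p>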



<p>Thanks to modern data science, algorithms can be far more complex than this. They can draw on millions of data points. Each applicant for a loan or a lease could have associated with their name not only a credit score but also a shopping history, education, internet browsing history, social media connections, health history, employment record, or even a preferred kind of candy bar.</p>



<p>Machine learning algorithms can take this data, parts of unimaginably large data sets, along with data of thousands or millions of other applicants, and draw complex connections.</p>



<p>For example, no human writes code that says if Applicant A went to Yale and likes Snickers candy bars, then their loan is approved. Instead, the machine learning algorithm itself identifies correlations and draws conclusions from them. Many people have likened machine learning algorithms to black boxes. While this is not entirely accurate, the algorithm sometimes finds correlations so complex and subtle that any human, even the designer of the software, would be hard-pressed to say why it approved or denied a loan.</p>



<p>So how does bias enter? If the algorithm is never fed information such as gender, race, national origin, or religion, can it still be biased along those lines? Consider the example above. Perhaps the algorithm discovers that people who attended certain Ivy League schools and live in predominantly white neighborhoods are more likely to pay off their loans, while people who mostly shop at dollar stores and frequent fast-food restaurants are not. This may reflect the income level of their parents, which in turn correlates with family history, hometown, and race. It may find a correlation in a previous address and tend to reject applicants moving from low-income neighborhoods. It may see that an applicant’s social media connections are in debt themselves. Or it may combine these and any number of other data points. What’s more, the connections the algorithm makes can be so subtle and complex that it would be difficult, if not impossible, to trace back exactly why it made the recommendation it did.</p>
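<p>The proxy effect described above can be demonstrated with a toy example. The “model” below is never shown the protected attribute, only a previous-neighborhood income feature; but because group membership correlates with that feature in this invented data set, approval rates still diverge sharply by group. The data, groups, and decision rule are all hypothetical.</p>

```python
# Minimal, invented illustration of proxy discrimination: the decision
# rule only ever sees a previous-ZIP income decile, never the group.

applicants = [
    # (protected_group, prev_zip_income_decile) -- synthetic data
    ("A", 8), ("A", 9), ("A", 7), ("A", 8),
    ("B", 3), ("B", 2), ("B", 4), ("B", 8),
]

def model(income_decile):
    # The rule looks only at the neighborhood proxy, not the group.
    return income_decile >= 5

def approval_rate(group):
    decisions = [model(d) for g, d in applicants if g == group]
    return sum(decisions) / len(decisions)

print(approval_rate("A"), approval_rate("B"))  # very different rates
```

<p>Even though the protected attribute never appears in the rule, the outcome differs by group because the proxy feature carries that information, which is precisely the difficulty with only inspecting a model’s inputs.</p>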



<p>Now you’re beginning to see the problem.</p>



<h3 class="wp-block-heading">The Problem With The Proposal</h3>



<p>The drafters of the proposal admit that bias can result from machine learning algorithms. However, the proposal drastically limits the recourse of those who feel that they have been discriminated against &#8211; so much so that it may be impossible to show discrimination existed.</p>



<p>If a particular person feels like they have been discriminated against, the proposal states that the algorithm needs to be broken down, piece by piece. “A defendant (the lending agency, landlord, or seller) will succeed under this defense where the plaintiff (the discriminated party) is unable to then show that the defendant&#8217;s analysis is somehow flawed, such as by showing that a factor used in the model is correlated with a protected class despite the defendant&#8217;s assertion.”</p>



<p>The problem is this &#8211; algorithms like these cannot be broken down piece by piece. They are exceedingly complex. On October 10th, the Interdisciplinary Working Group on Algorithmic Justice &#8211; a group of ten computer scientists, legal scholars, and social scientists from the Santa Fe Institute and the University of New Mexico &#8211; submitted a formal response to the proposal. They state that the decisions algorithms make can be very subtle, and that the proposed amendment does not fully appreciate how algorithms actually work.</p>



<p>What’s more, there may not be one single factor leading to discrimination. A “disparate impact can occur if any combination of input factors, combined in any way, can act as a proxy for race or another protected characteristic,” the authors state. In other words, not only individual factors but also the connections between them determine whether someone is approved for a lease, a loan, or a purchase. There is no way to pinpoint a single factor that contributes to the discrimination.</p>



<h3 class="wp-block-heading">How Can It Be Improved?</h3>



<p>Algorithms are probably here to stay. So if there is discrimination, who is to blame? Is it the lender and the landlord? Or the drafter of the algorithm?</p>



<p>Perhaps there is another way.</p>



<p>The Interdisciplinary Working Group on Algorithmic Justice suggests that transparency is the key: these algorithms cannot hide behind the curtain of intellectual property. For algorithms that behave like a “black box”, independent auditors need to test them continually, feeding in sets of fabricated data to see what biases result.</p>
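<p>The auditing idea can be sketched as follows: treat the scoring system as an opaque function, feed it fabricated applicants from different groups, and compare selection rates. Here the comparison uses the four-fifths rule, a common disparate-impact threshold. The <code>black_box</code> function and its inputs are hypothetical stand-ins for a real, unseen model, not anything proposed verbatim by the Working Group.</p>

```python
# Sketch of a black-box disparate-impact audit. The auditor only
# observes inputs and outputs of black_box, never its internals.

def black_box(applicant):
    # Hypothetical opaque model standing in for a vendor's system.
    return applicant["prev_zip_income"] >= 50_000 and applicant["score"] >= 620

def audit(synthetic_applicants):
    """Return (min selection rate / max selection rate, per-group rates)."""
    rates = {}
    for group in {a["group"] for a in synthetic_applicants}:
        members = [a for a in synthetic_applicants if a["group"] == group]
        rates[group] = sum(black_box(a) for a in members) / len(members)
    return min(rates.values()) / max(rates.values()), rates

# Fabricated applicants: identical score distributions, different proxy feature.
synthetic = (
    [{"group": "A", "prev_zip_income": 70_000, "score": s} for s in (600, 640, 680, 700)]
    + [{"group": "B", "prev_zip_income": 40_000, "score": s} for s in (600, 640, 680, 700)]
)
ratio, rates = audit(synthetic)
print(rates, "ratio:", ratio, "fails four-fifths rule:", ratio < 0.8)
```

<p>Because the two groups differ only in the proxy feature, any gap in selection rates is attributable to that proxy, which is exactly the kind of evidence an external auditor could produce without ever opening the box.</p>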



<p>Will algorithm providers agree to this? That’s unclear. Providers of these algorithms often consider such testing “reverse engineering” and do not allow it. At the same time, the Interdisciplinary Working Group on Algorithmic Justice argues that it is not reasonable to allow lenders, landlords, and sellers to defer all responsibility. “The proposed regulation is so focused on assuring that mortgage lenders and landlords can make profits, it loses sight of the potential for algorithms to rapidly reverse that progress [from the FHA]”, they state. They later continue, “We are entering an algorithmic age&#8230; Our best recourse is to vigorously subject them to the test of disparate impact.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/">Will Machine Learning Algorithms Erase The Progress Of The Fair Housing Act?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A world ruled by robots? This artificial intelligence expert paints a different reality</title>
		<link>https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/</link>
					<comments>https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 18 Jul 2018 06:28:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[algorithm learning]]></category>
		<category><![CDATA[computer science]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2629</guid>

					<description><![CDATA[<p>Source &#8211; channelnewsasia.com University of Washington professor Pedro Domingos shot to prominence after his book was seen on China president Xi Jinping’s bookshelf during the leader’s annual New <a class="read-more-link" href="https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/">A world ruled by robots? This artificial intelligence expert paints a different reality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; channelnewsasia.com</p>
<p>University of Washington professor Pedro Domingos shot to prominence after his book was seen on Chinese president Xi Jinping’s bookshelf during the leader’s annual New Year’s Day greetings this year.</p>
<p>SINGAPORE: Artificial intelligence &#8211; two words that have been bandied about everywhere, seemingly to give anything a shimmer of technological stardust.</p>
<p>Looking for a washing machine? Choose the latest AI-enabled machine to better clean your clothes. Seeking help from your service provider? There’s an AI chatbot waiting to answer all your queries. Or so it seems.</p>
<p>Yet, the buzzword also throws up questions for many. Does artificial intelligence mean robots? Is it going to take away my job? Is it going to take over humanity?</p>
<p>This is where Mr Pedro Domingos comes in. A professor in the University of Washington’s Computer Science and Engineering department, he is widely known as a thought leader in this field. His book, The Master Algorithm, was even seen on the bookshelf of Chinese president Xi Jinping – a big proponent of AI – at the start of the year.</p>
<p>Channel NewsAsia spoke to Mr Domingos, who was in town at the invitation of StarHub, on Tuesday (Jul 17) to put some of the more common AI misconceptions to bed and to get his views on whether Skynet is, indeed, coming.</p>
<p><strong>Q: What is AI and how would you explain it to a 5-year-old?</strong></p>
<p><strong>Domingos</strong>: AI is getting machines to do what traditionally needs human intelligence to do.</p>
<p>Things like reasoning, problem solving, common sense, knowledge, understanding what you see, understanding speech and language, and learning. Computers traditionally couldn’t do these, and getting them to do so is what AI is all about.</p>
<p>For a five-year-old, it would be like a toy that they can play with, like another child. Remember those Sony Aibo dogs? They didn’t have a lot of AI, but they were entertaining to children. Now imagine an AI-enabled “dog” that was more like a real dog – that’s the kind of things we want to do with AI.</p>
<p><strong>Q: AI equals robots. Is this correct?</strong></p>
<p><strong>Domingos</strong>: They are related, but they are different. A robot is a machine; the brain of that machine is the computer. AI is to a robot what your brain is to your body.</p>
<p><strong>Q: Will these AI-powered robots snatch away our jobs?</strong></p>
<p><strong>Domingos</strong>: I think AI and robots will cause a lot of changes in the job market. Some jobs might disappear, but I think a lot more jobs will appear than disappear. This has always been the case with automation.</p>
<p>What jobs are at risk, you may ask. A truck driver, for example. If there are self-driving trucks, then truck drivers might lose their jobs. In the short term, I think we will have self-driving trucks on the freeways, while in the cities it will still be truck drivers. What that will do is alleviate the shortage of truck drivers. But in the long run, truck driving as a job will cease.</p>
<p>Having said that, when ATMs were introduced, people thought they would put bank tellers out of work, because they do the same job. But there are actually more bank tellers today than there were before ATMs.</p>
<p>What happened is now the bank tellers do all sorts of things besides give cash to people, and I think the same will be true of AI. We will just have people doing very different things from the ones they do today, because (these things) became economically feasible.</p>
<p><strong>Q: AI is powering the use of facial recognition in countries like China and, with it, raising the spectre of Big Brother societies. Will we live in a Minority Report-like world soon?</strong></p>
<p><strong>Domingos</strong>: There is that danger. AI for an authoritarian regime is an amazing tool, and facial recognition is an example. I think if you want to use AI for oppressive purposes like controlling your population, there is definitely a lot of opportunities to do that.</p>
<p>But AI can also be used by the people to give themselves more power. AI is like any technology; it gives power to those who have it. So the question is who is going to have it: governments, large companies, or all of us?</p>
<p>I think it should be all of us. But before we &#8211; as citizens, as individuals, as professionals &#8211; can take advantage of AI and use it for our own ends, we have to understand what it does. Not at a fine-grained level; it’s like knowing how to operate a car &#8211; you don’t have to understand how the engine works, but you do need to understand how the steering wheel and the pedals work.</p>
<p>One of the things people should be aware of is, these days, every time they interact with anything online, chances are there is an algorithm learning what they do, what they like, how they behave. And you should realise that you’re teaching the system every time you use it.</p>
<p>For example, if you’re on Netflix and you choose a movie, you should realise that tomorrow it is going to recommend similar movies.</p>
<p>However, people should be able to demand more of tech companies &#8211; to tell Amazon, for example: “Don’t show me more watches, because I just bought one.” Right now, we can’t do those things, not because the AI is not capable but because the tech companies are not providing these options.</p>
<p>Users should demand a higher level of control from all these AI systems than they have right now.</p>
<p><strong>Q: Going a step further, will robots control us in a Skynet-type reality seen in the Terminator movies?</strong></p>
<p><strong>Domingos</strong>: I think Skynet is not going to happen and we’re not going to be ruled by robots.</p>
<p>I think it is interesting to understand why it is so prominent in people’s minds, though, beyond the fact that it makes a good story and we have seen it in Terminator.</p>
<p>I think it’s very unlikely that machines will spontaneously turn evil and decide to exterminate us, because the machines, no matter how intelligent they are, only use their intelligence to achieve the goals we set for them. So as long as we set those goals and set the constraints, then we can check the results and it’s very unlikely that the machines will get out of control.</p>
<p>Having said that, there can be bad actors that decide to program machines for bad purposes, and those we have to worry about. Criminals or authoritarian governments &#8211; these are all potential issues that could lead to bad uses of AI.</p>
<p>The other danger we have to worry about is that we increasingly put control of important things in the hands of AI, but they (the machines) are not that smart. They make mistakes. They have no common sense. They take you too literally. They give you what they think you want instead of what you really want.</p>
<p>This goes back to the Skynet scenario: As soon as we see anything that exhibits even a small amount of intelligence, we immediately project on to it our human qualities that it doesn’t have because the only intelligence we know is ours. Like free will, and consciousness, and all of that; they don’t have it. They are just problem-solving engines.</p>
<p>So people worry that computers will get too smart and take over the world, but the real problem is that they are too stupid. And they have already taken over the world.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/">A world ruled by robots? This artificial intelligence expert paints a different reality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-world-ruled-by-robots-this-artificial-intelligence-expert-paints-a-different-reality/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
