Will Machine Learning Algorithms Erase The Progress Of The Fair Housing Act?

Source: forbes.com

This August, the Department of Housing and Urban Development put forth a proposed ruling that could turn back the clock on the Fair Housing Act (FHA). Under the ruling, landlords, lenders, and property sellers who use third-party machine learning algorithms to decide who gets approved for a loan or who can purchase or rent a property would not be held responsible for any discrimination resulting from those algorithms.

The Fair Housing Act

The Fair Housing Act (FHA) is part of the Civil Rights Act of 1968. It stated that people should not be discriminated against in the purchase of a home, the rental of a property, or qualification for a lease on the basis of race, national origin, or religion. In 1974, the law was expanded to include gender, and in 1988, disability. Discrimination based on sexual orientation or gender identity is banned in some states and localities. A 2018 report on the FHA by the National Fair Housing Alliance, which analyzed 50 years of data, concluded that there is still a long way to go.

The Proposed Ruling

The proposed ruling addresses the fact that many decisions on who gets approved for a loan or a lease, or who is allowed to purchase a property, now rely on machine learning algorithms. These algorithms allow near-instantaneous approval by sifting through enormous data sets to determine who is most likely to, say, pay back a loan. In today’s data-driven society, machine learning algorithms are everywhere, simplifying tasks where no human could sort through such massive amounts of data. They are used for everything from determining who qualifies for a credit card to choosing which ads to show you on the internet and what to suggest to you on Netflix.

Some believe that handing decision-making over to software would eliminate any human discrimination (unconscious or otherwise) that may exist. But it would be incorrect to think that algorithms don’t suffer from biases of their own.

Drafters of the proposed ruling don’t deny that these algorithms can produce bias. The controversy arises in asking who should be held responsible when discrimination results. The proposal states that lenders and sellers who use these algorithms should not be held accountable for that bias.

How Machine Learning Algorithms Work

You can devise a simple algorithm on a piece of paper to approve or deny a loan. If, say, people in the top 60% of credit scores tend to reliably pay off their loans, you can sort applicants by credit score and approve those who fall within the top 60%.
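As a purely illustrative sketch, here is what that paper-and-pencil rule might look like in Python; the applicant names and credit scores are hypothetical.

```python
# Minimal sketch of the hand-written rule described above:
# sort applicants by credit score and approve the top 60%.
# All names and scores here are hypothetical.

applicants = [
    ("Applicant A", 710),
    ("Applicant B", 640),
    ("Applicant C", 780),
    ("Applicant D", 590),
    ("Applicant E", 700),
]

# Rank from highest to lowest credit score.
ranked = sorted(applicants, key=lambda a: a[1], reverse=True)

# Approve anyone who falls in the top 60% of the ranking.
cutoff = int(len(ranked) * 0.6)
approved = {name for name, _ in ranked[:cutoff]}

for name, score in applicants:
    decision = "approved" if name in approved else "denied"
    print(f"{name} (score {score}): {decision}")
```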

Thanks to modern-day data science, algorithms can be far more complex than this. They can be fed millions of data points. Each applicant for a loan or a lease could have not only a credit score associated with their name, but also a shopping history, education, internet browsing history, social media connections, health history, employment record, or even a preferred brand of candy bar.

Machine learning algorithms can take this data, along with the data of thousands or millions of other applicants, drawn from unimaginably large data sets, and find complex connections.

For example, no human writes code that says: if Applicant A went to Yale and likes Snickers candy bars, approve the loan. Instead, the machine learning algorithm itself identifies correlations and draws conclusions from them. Many people have likened machine learning algorithms to black boxes. While this is not entirely accurate, the correlations an algorithm finds are sometimes so complex and subtle that any human, even the designer of the software, would be hard-pressed to explain why it approved or denied a loan.

So how does bias enter? If the algorithm is not fed information such as gender, race, national origin, or religion, can it still be biased in these ways? Let’s look at our example above. Perhaps the algorithm discovers that people who attended certain Ivy League schools and live in predominantly white neighborhoods are more likely to pay off their loans, while people who mostly shop at dollar stores and frequent fast-food restaurants are not. This may reflect the income level of their parents, which in turn can correlate with family history, hometown, and race. The algorithm may find a correlation with a previous address and tend not to approve applicants moving from low-income neighborhoods. It may see that an applicant’s connections on social media are in debt themselves. Or it may make a connection between these or any number of other data points. What’s more, the connections the algorithm makes could be so subtle and complex that it would be difficult, if not impossible, to trace back exactly why it made the recommendation it did.
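A toy simulation can make the proxy problem concrete. In the hypothetical sketch below, the decision rule never sees the protected class at all, only a previous-address feature that happens to be correlated with it; every probability and threshold is made up for illustration.

```python
import random

random.seed(0)

def make_applicant():
    # "group" stands in for a protected class; the decision rule below never sees it.
    group = random.choice(["A", "B"])
    # Proxy feature: chance of having a previous address in a historically
    # low-income neighborhood differs by group (all numbers are made up).
    low_income_prev_address = random.random() < (0.7 if group == "B" else 0.2)
    # In this toy world, the actual likelihood of repaying is the same for everyone.
    repays = random.random() < 0.8
    return group, low_income_prev_address, repays

applicants = [make_applicant() for _ in range(10_000)]

# A naive "learned" rule that keys only on the proxy, never on the protected class.
def approve(low_income_prev_address):
    return not low_income_prev_address

for g in ("A", "B"):
    subset = [a for a in applicants if a[0] == g]
    approval_rate = sum(approve(a[1]) for a in subset) / len(subset)
    repay_rate = sum(a[2] for a in subset) / len(subset)
    print(f"Group {g}: approval rate {approval_rate:.0%}, actual repayment rate {repay_rate:.0%}")

# The protected class was never an input, yet approval rates diverge sharply,
# because the proxy carries much of the same information.
```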

Now you’re beginning to see the problem.

The Problem With The Proposal

The drafters of the proposal admit that bias can result from machine learning algorithms. However, the proposal drastically limits the recourse of those who feel that they have been discriminated against – so much so that it may be impossible to show discrimination existed.

If a person believes they have been discriminated against, the proposal requires the algorithm to be broken down, piece by piece. “A defendant (the lending agency, landlord, or seller) will succeed under this defense where the plaintiff (the discriminated party) is unable to then show that the defendant’s analysis is somehow flawed, such as by showing that a factor used in the model is correlated with a protected class despite the defendant’s assertion.”

The problem is this – algorithms like these cannot simply be broken down piece by piece. They are exceedingly complex. On October 10th, the Interdisciplinary Working Group on Algorithmic Justice – a group of ten computer scientists, legal scholars, and social scientists from the Santa Fe Institute and the University of New Mexico – submitted a formal response to the proposal. They state that the decisions algorithms make can be very subtle, and that the proposal does not fully appreciate how algorithms actually work.

What’s more, there may not be one single factor leading to discrimination. A “disparate impact can occur if any combination of input factors, combined in any way, can act as a proxy for race or another protected characteristic,” the authors state. This means that not only individual factors but also the connections between them determine whether someone is approved for a lease, a loan, or a purchase. There is no way to pinpoint a single factor that contributes to the discrimination. A sketch of what an outcome-based disparate impact test looks like follows below.
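To make “disparate impact” concrete, here is a minimal hypothetical sketch of that kind of outcome test: compare approval rates across groups, treating the model’s decisions as the only evidence. The numbers are invented, and the 80% (“four-fifths”) threshold is a common rule of thumb borrowed from employment law, not something the proposal specifies.

```python
# Hypothetical disparate-impact check on a model's observed decisions.
def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of the protected group's approval rate to the reference group's."""
    def rate(group):
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected_group) / rate(reference_group)

# Made-up observed outcomes: each tuple is (group, approved?).
observed = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 40 + [("B", False)] * 60

ratio = disparate_impact_ratio(observed, protected_group="B", reference_group="A")
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.50 in this made-up example

# A ratio well below 0.8 (the common "four-fifths" rule of thumb) would flag a
# disparate impact, no matter which combination of inputs produced the decisions.
```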

How Can It Be Improved?

Algorithms are probably here to stay. So if there is discrimination, who is to blame? The lender or the landlord? Or the developer of the algorithm?

Perhaps there is another way.

The Interdisciplinary Working Group on Algorithmic Justice suggests that transparency is the key: these algorithms cannot hide behind the curtain of intellectual property. For algorithms that behave like a “black box”, independent auditors need to continually test them by feeding in sets of false data to see what biases result.
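As a rough illustration of that kind of audit, the sketch below treats the scoring system as a black box and probes it with matched pairs of synthetic applicants that differ in only one attribute. The score_applicant function, the attribute names, and all numbers are hypothetical stand-ins, not anything drawn from the proposal or the working group’s letter.

```python
import random

random.seed(1)

def score_applicant(applicant):
    # Placeholder standing in for a vendor's opaque scoring model.
    base = applicant["credit_score"] / 850
    penalty = 0.3 if applicant["prev_zip_median_income"] < 40_000 else 0.0
    return base - penalty

def make_matched_pair():
    # Two synthetic applicants identical except for one attribute.
    shared = {"credit_score": random.randint(550, 800)}
    a = dict(shared, prev_zip_median_income=75_000)   # higher-income previous ZIP
    b = dict(shared, prev_zip_median_income=30_000)   # lower-income previous ZIP
    return a, b

gaps = []
for _ in range(1_000):
    a, b = make_matched_pair()
    gaps.append(score_applicant(a) - score_applicant(b))

print(f"Average score gap between otherwise-identical applicants: {sum(gaps) / len(gaps):.2f}")

# A consistent gap tied to a single changed attribute is the kind of signal an
# independent auditor could surface without ever opening up the model itself.
```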

Will algorithm providers agree to this? That’s unclear. Oftentimes, providers of the algorithms consider this “reverse-engineering” and do not allow it. At the same time, the Interdisciplinary Working Group on Algorithmic Justice argues that it is not reasonable to allow lenders, landlords, and sellers to defer all responsibility. “The proposed regulation is so focused on assuring that mortgage lenders and landlords can make profits, it loses sight of the potential for algorithms to rapidly reverse that progress [from the FHA]”, they state. They continue, “We are entering an algorithmic age… Our best recourse is to vigorously subject them to the test of disparate impact.”
