6 ways to reduce different types of bias in machine learning

Source: searchenterpriseai.techtarget.com

As companies step up the use of machine learning-enabled systems in their day-to-day operations, they become increasingly reliant on those systems to help them make critical business decisions. In some cases, the machine learning systems operate autonomously, making it especially important that the automated decision-making works as intended.

However, machine learning-based systems are only as good as the data that’s used to train them. If there are inherent biases in the data used to feed a machine learning algorithm, the result could be systems that are untrustworthy and potentially harmful.

In this article, you’ll learn why bias in AI systems is a cause for concern, how to identify different types of biases and six effective methods for reducing bias in machine learning.

Why is eliminating bias important?

The power of machine learning comes from its ability to learn from data and apply that learning experience to new data the systems have never seen before. However, one of the challenges data scientists face is ensuring that the data fed into machine learning algorithms is not only clean, accurate and, in the case of supervised learning, well labeled, but also free of any inherently biased data that can skew machine learning results.

Supervised learning, one of the core approaches to machine learning, depends especially heavily on the quality of the training data. So it should be no surprise that when biased training data is used to teach these systems, the result is biased AI systems. Biased AI systems that are put into production can cause problems, especially when used in automated decision-making, autonomous operation or facial recognition software that makes predictions about or renders judgment on individuals.

Some notable examples of the bad outcomes caused by algorithmic bias include: a Google image recognition system that misidentified images of minorities in an offensive way; automated credit applications from Goldman Sachs that have sparked an investigation into gender bias; and a racially biased AI program used to sentence criminals. Enterprises must be hyper-vigilant about machine learning bias: Any value delivered by AI and machine learning systems in terms of efficiency or productivity will be wiped out if the algorithms discriminate against individuals and subsets of the population.

However, AI bias is not only limited to discrimination against individuals. Biased data sets can jeopardize business processes when applied to objects and data of all types. For example, take a machine learning model that was trained to recognize wedding dresses. If the model was trained using Western data, then wedding dresses would be categorized primarily by identifying shades of white. This model would fail in non-Western countries where colorful wedding dresses are more commonly accepted. Errors also abound where data sets have bias in terms of the time of day when data was collected, the condition of the data and other factors.

All of the examples described above represent some sort of bias that was introduced by humans as part of their data selection and identification methods for training the machine learning model. Because the systems technologists build are necessarily colored by their own experiences, technologists must be very aware that their individual biases can jeopardize the quality of the training data. Individual bias, in turn, can easily become systemic bias as bad predictions and unfair outcomes are automated.

How to identify and measure AI bias

Part of the challenge of identifying bias is due to the difficulty of seeing how some machine learning algorithms generalize their learning from the training data. Deep learning algorithms in particular have proven remarkably powerful. This approach to neural networks leverages large quantities of data and high-performance compute power, resulting in machine learning models with profound abilities.

Deep learning, however, is a “black box”: it’s not clear how the neural network model arrived at an individual prediction. You can’t simply query the system and determine with precision which inputs resulted in which outputs, which makes it hard to spot and eliminate potential biases when they arise in the results. Researchers are increasingly focusing on adding explainability to neural networks. Verification, the process of formally proving properties of a neural network, is one such effort, but the sheer size of modern networks makes checking them for bias difficult.
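While full explainability remains an open research problem, simpler probes can offer a partial view. The sketch below uses permutation importance, a model-agnostic technique that measures how much a model’s score drops when each input feature is shuffled; heavy reliance on a sensitive feature, or on a proxy for one, is a warning sign. The data and model here are hypothetical stand-ins.

```python
# A minimal sketch of one accessible explainability probe: permutation
# importance. It does not open the black box fully, but it flags which
# inputs the model leans on most.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Hypothetical data and a small neural network stand-in.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```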

Until we have truly explainable systems, we must understand how to recognize and measure AI bias in machine learning models. Some biases arise from the selection of the training data set itself: the model needs to represent the data as it exists in the real world. If your data set is artificially constrained to a subset of the population, the model will produce skewed results in the real world, even if it performs very well against the training data. Likewise, data scientists must take care in how they select which data to include in a training data set and which features or dimensions of that data to use in training.
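As a concrete starting point, the sketch below compares a training set’s group proportions against known real-world proportions to flag a data set artificially constrained to a subset of the population. The group names and numbers are hypothetical.

```python
# A minimal sketch, assuming the real-world group proportions are known
# (e.g., from census data): compare them against the training set.
import pandas as pd

population = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
train = pd.DataFrame({"group": ["group_a"] * 700 + ["group_b"] * 250
                               + ["group_c"] * 50})

observed = train["group"].value_counts(normalize=True)
for group, expected in population.items():
    share = observed.get(group, 0.0)
    print(f"{group}: train {share:.2f} vs population {expected:.2f} "
          f"(gap {share - expected:+.2f})")
```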

Companies are combating inherent data bias by implementing programs to broaden not only the diversity of their data sets, but also the diversity of their teams. More diversity on teams means that people with many perspectives and varied experiences are feeding systems the data points to learn from. Unfortunately, the tech industry today is very homogeneous; there are not many women or people of color in the field. Efforts to diversify teams should also have a positive impact on the machine learning models produced, since data science teams will be better able to understand the requirements for more representative data sets.

Different types of machine learning bias

There are several sources of bias that can have an adverse impact on machine learning models. Some are represented in the data that is collected, others in the methods used to sample, aggregate, filter and enhance that data.

  • Sampling bias. One common form of bias results from mistakes made when collecting data. Sampling bias happens when data is collected in a manner that oversamples from one community and undersamples from another, whether intentionally or unintentionally. The result is a model in which a particular characteristic is overrepresented, and which is weighted or biased in that way. The ideal sample should either be completely random or match the characteristics of the population to be modeled (see the reweighting sketch after this list).
  • Measurement bias. Measurement bias is the result of not accurately measuring or recording the data that has been selected. For example, if you are using salary as a measurement, there might be differences in whether salary includes bonuses or other incentives, or regional differences in the data. Other measurement bias can result from using incorrect units, normalizing data in incorrect ways or miscalculations.
  • Exclusion bias. Similar to sampling bias, exclusion bias arises from data that is inappropriately removed from the data source. When you have petabytes or more of data, it’s tempting to select a small sample to use for training, but when doing so you might be inadvertently excluding certain data, resulting in a biased data set. Exclusion bias can also occur due to removing duplicates from data when the data elements are actually distinct.
  • Experimenter or observer bias. Sometimes, the act of recording data itself can be biased. When recording data, the experimenter or observer might only record certain instances of data, skipping others. Perhaps you’re creating a machine learning model based on sensor data but only sampling every few seconds, missing key data elements. Or there is some other systemic issue in the way that the data has been observed or recorded. In some instances, the data itself might even become biased by the act of observing or recording that data, which could trigger behavioral changes.
  • Prejudicial bias. One insidious form of bias has to do with human prejudices. In some cases, data might become tainted by bias based on human activities that under-selected certain communities and over-selected others. When using historical data to train models, especially in areas that have previously been rife with prejudicial bias, care should be taken to make sure new models don’t incorporate that bias.
  • Confirmation bias. Confirmation bias is the desire to select only the information that supports or confirms something you already know, rather than data that might suggest something that runs counter to preconceived notions. The result is data that is tainted because it was selected in a biased manner or because information that doesn’t confirm the preconceived notion is thrown out.
  • Bandwagoning or bandwagon effect. The bandwagon effect is a form of bias that happens when there is a trend occurring in the data or in some community. As the trend grows, the data supporting that trend increases and data scientists run the risk of overrepresenting the idea in the data they collect. Moreover, any significance in the data may be short-lived: The bandwagon effect could disappear as quickly as it appeared.
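To make the sampling bias item concrete: one common mitigation is to reweight records so that each group’s effective share matches its share of the real population. The sketch below is a minimal illustration with hypothetical group names and proportions.

```python
# A minimal sketch of correcting sampling bias by reweighting: each
# record gets a weight equal to its group's population share divided by
# its share in the sample, so oversampled groups count less.
import pandas as pd

population_share = {"urban": 0.6, "rural": 0.4}   # hypothetical
sample = pd.DataFrame({"region": ["urban"] * 900 + ["rural"] * 100})

sample_share = sample["region"].value_counts(normalize=True)
sample["weight"] = sample["region"].map(
    lambda g: population_share[g] / sample_share[g])

# Weighted group totals now match the population shares; most training
# APIs accept these weights via a sample_weight argument.
print(sample.groupby("region")["weight"].first())
```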

There are no doubt other types of bias beyond the ones listed above that might be represented in a data set, and all of them should be identified early in the machine learning project.

Six ways to reduce bias in machine learning

1. Identify potential sources of bias. Using the above sources of bias as a guide, one way to address and mitigate bias is to examine the data and see how the different forms of bias could impact the data being used to train the machine learning model. Have you selected the data without bias? Have you made sure there isn’t any bias arising from errors in data capture or observation? Are you making sure not to use a historical data set tainted with prejudice or confirmation bias? By asking these questions, you can help to identify and potentially eliminate that bias.
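One way to start answering these questions is a quick first-pass audit of the data. The sketch below, which assumes a pandas DataFrame with a known sensitive attribute column, surfaces group counts, missing values and duplicates that could signal several of the biases described above.

```python
# A minimal sketch of a first-pass bias audit; column names are
# hypothetical stand-ins for your own data.
import pandas as pd

def audit(df: pd.DataFrame, group_col: str) -> None:
    """Print basic signals of sampling, exclusion and capture problems."""
    print("Group counts:\n", df[group_col].value_counts())
    print("Missing values per column:\n", df.isna().sum())
    print("Exact duplicate rows:", df.duplicated().sum())

df = pd.DataFrame({"group": ["a", "a", "b", "b", "b"],
                   "income": [40, None, 55, 55, 55]})
audit(df, "group")
```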

2. Set guidelines, rules and procedures for eliminating bias. To keep bias in check, organizations should set guidelines, rules and procedures for identifying, communicating and mitigating potential data set bias. Forward-thinking organizations are documenting cases of bias as they occur, outlining the steps taken to identify them, and explaining the efforts taken to mitigate them. By establishing these rules and communicating them in an open, transparent manner, organizations can put their best foot forward in addressing machine learning model bias.

3. Identify accurate representative data. Prior to collecting and aggregating data for machine learning model training, organizations should first try to understand what a representative data set should look like. Data scientists should use their data analysis skills to understand the nature of the population that is to be modeled along with the characteristics of the data used to create the machine learning model. These two things should match in order to build a data set with as little bias as possible.
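Where the real-world population proportions are known, the match between the data set and the population can be tested statistically. The sketch below applies a chi-square goodness-of-fit test to hypothetical group counts; a very small p-value suggests the data set does not represent the population and needs rebalancing before training.

```python
# A minimal sketch: test whether a training set's group counts
# plausibly match the modeled population. Counts and shares are
# hypothetical.
from scipy.stats import chisquare

observed_counts = [700, 250, 50]      # groups A, B, C in the data set
population_shares = [0.5, 0.3, 0.2]   # known real-world shares
total = sum(observed_counts)
expected_counts = [share * total for share in population_shares]

stat, p_value = chisquare(f_obs=observed_counts, f_exp=expected_counts)
print(f"chi-square {stat:.1f}, p-value {p_value:.3g}")
```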

4. Document and share how data is selected and cleansed. Many forms of bias occur when selecting data from among large data sets and during data cleansing operations. To minimize bias-inducing mistakes, organizations should document their methods of data selection and cleansing and allow others to examine whether and when the models exhibit any form of bias. Transparency allows the root causes of bias to be found and eliminated in future model iterations.
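One lightweight way to build that documentation is to record every cleansing step as it happens. The helper below is illustrative, not a standard API: it wraps each filter so the operation, row counts and rationale are logged for later review.

```python
# A minimal sketch of documenting data cleansing with a provenance log.
import pandas as pd

def logged_filter(df: pd.DataFrame, mask, reason: str,
                  log: list) -> pd.DataFrame:
    """Apply a boolean mask and record what was dropped and why."""
    kept = df[mask]
    log.append({"step": reason,
                "rows_before": len(df), "rows_after": len(kept)})
    return kept

log: list = []
df = pd.DataFrame({"salary": [50000, None, 72000, 1],
                   "region": ["N", "S", "N", "S"]})
df = logged_filter(df, df["salary"].notna(), "drop missing salary", log)
df = logged_filter(df, df["salary"] > 1000, "drop implausible salaries", log)
print(pd.DataFrame(log))  # shareable record of every cleansing decision
```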

5. Evaluate the model for bias, in addition to performance. Machine learning models are often evaluated prior to being placed into operation, and most of these evaluation steps focus on aspects of model accuracy and precision. Organizations should also add measures of bias detection to their model evaluation steps. Even if the model performs with acceptable accuracy and precision for particular tasks, it could fail on measures of bias, which might point to issues with the training data.
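A simple bias measure to add alongside accuracy is the demographic parity difference: the gap in positive-prediction rates across groups. The sketch below computes it over hypothetical model outputs; the 0.1 tolerance is likewise an assumption, not a standard.

```python
# A minimal sketch of a bias check for model evaluation: compare the
# model's positive-prediction rate across groups. Data are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group": ["a"] * 500 + ["b"] * 500,
    "predicted_positive": [1] * 300 + [0] * 200 + [1] * 200 + [0] * 300,
})

rates = results.groupby("group")["predicted_positive"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.1:  # hypothetical tolerance
    print("Warning: model may be biased; inspect the training data.")
```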

6. Monitor and review models in operation. Finally, there is a difference between how the machine learning model performs in training and how it performs in the real world. Organizations should provide methods to monitor and continuously review the models as they perform in operation. If there are signs that certain forms of bias are showing up in the results, then the organization can take action before the bias causes irreparable harm.
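In production, the same group-level measures can be tracked over time and compared against the rates observed at training time. The sketch below is a minimal drift check with hypothetical group names, baselines and tolerance.

```python
# A minimal sketch of monitoring a deployed model: alert when a group's
# live outcome rate drifts past a tolerance from its training baseline.
TRAINING_BASELINE = {"group_a": 0.55, "group_b": 0.52}  # hypothetical
TOLERANCE = 0.05                                        # hypothetical

def check_drift(live_rates: dict) -> list:
    """Return the groups whose live rate drifted past the tolerance."""
    return [g for g, baseline in TRAINING_BASELINE.items()
            if abs(live_rates.get(g, 0.0) - baseline) > TOLERANCE]

# Example: group_b's approval rate has fallen in production.
for group in check_drift({"group_a": 0.54, "group_b": 0.41}):
    print(f"Bias drift alert for {group}; review recent predictions.")
```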

Combating machine learning bias makes for more robust systems

When bias becomes embedded in machine learning models, it can have an adverse impact on our daily lives. The bias is exhibited in the form of exclusion, such as certain groups being denied loans or not being able to use the technology, or in the technology not working the same way for everyone. As AI becomes a bigger part of our lives, the risks from bias only grow larger. Companies, researchers and developers have a responsibility to minimize bias in AI systems. A lot of it comes down to ensuring that data sets are representative and that the interpretation of those data sets is correctly understood. However, making sure data sets aren’t biased won’t remove bias on its own, so having diverse teams of people working toward the development of AI remains an important goal for enterprises.
