<script type="application/ld+json">
{
 "@context": "https://schema.org",
 "@type": "FAQPage",
 "mainEntity": [{
   "@type": "Question",
   "name": "What is an algorithmic bias?",
   "acceptedAnswer": {
     "@type": "Answer",
     "text": "Algorithmic bias refers to the lack of fairness in the outputs generated by an algorithm. These biases may include age discrimination, gender bias, and racial bias."
   }
 },{
   "@type": "Question",
   "name": "What causes algorithmic biases?",
   "acceptedAnswer": {
     "@type": "Answer",
     "text": "There are two prominent reasons why the training data could cause algorithmic biases. Firstly, it could be caused by personal biases that the data gatherers themselves hold. Secondly, it could be because of environmental biases that could have been imposed unintentionally (or even intentionally) while the data was being gathered."
   }
 },{
   "@type": "Question",
   "name": "How to fight algorithmic biases?",
   "acceptedAnswer": {
     "@type": "Answer",
     "text": "Diversity is the key to solving algorithmic biases. Diversity efforts should be taken at every step of the project or process. While creating artificial intelligence systems, you need to make sure that the training data properly represents the actual scenarios that the algorithm is intended to be used in."
   }
 }]
}
</script>

Algorithmic bias

What is an algorithmic bias?

For many years, the world assumed that artificial intelligence did not hold the biases and prejudices of its creators. Since AI is driven by cold, hard mathematical logic, the thinking went, it would be completely unbiased and neutral.

The world was wrong.

AI has, in many cases, manifested the biases that humans tend to hold. In some instances, it has even amplified these biases. 

Algorithmic bias refers to the lack of fairness in the outputs generated by an algorithm. These biases may include age discrimination, gender bias, and racial bias.

Algorithmic biases can deepen existing inequalities and materially affect people’s lives. For example, a person’s chances of landing a job they deserve could be reduced simply because they belong to a group the algorithm is biased against.


What causes algorithmic biases?

A bias in an algorithm’s output generally stems from the data it was trained on. 

There are two prominent ways training data can introduce algorithmic bias. First, through the personal biases that the data gatherers themselves hold. Second, through environmental biases imposed, unintentionally or even intentionally, while the data was being gathered.

The people gathering the data most likely hold biases they are not even aware of, and they end up projecting those biases onto the data collection process itself.

The algorithm may also not be trained on enough data to represent the actual scenarios the AI system is expected to operate in. For example, there have been instances where algorithms were trained on data pertinent only to Caucasians; those systems ended up generating racially biased outputs.

Similarly, an artificial intelligence system may be trained on data sourced from and about one region, while the system is intended to be used worldwide. It would not be surprising if such a system generated biased outputs.
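To make the underrepresentation problem concrete, here is a toy sketch (all data, group names, and the "model" are invented for illustration). A simple majority-vote predictor trained on a dataset where one group vastly outnumbers another ends up learning only the majority group’s pattern:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Invented toy rule: group "A" follows label = feature,
# group "B" follows the opposite rule.
def true_label(group, feature):
    return feature if group == "A" else 1 - feature

# Group "A" is heavily overrepresented in the training set.
groups = ["A"] * 950 + ["B"] * 50
train = [(g, random.randint(0, 1)) for g in groups]
train = [(g, f, true_label(g, f)) for g, f in train]

# A "model" that ignores group membership: predict the majority
# label observed in training for each feature value.
votes = defaultdict(Counter)
for g, f, y in train:
    votes[f][y] += 1
model = {f: c.most_common(1)[0][0] for f, c in votes.items()}

# Per-group accuracy on a balanced test set: the model has simply
# learned the majority group's rule.
accuracy = {}
for group in ("A", "B"):
    test = [(f, true_label(group, f)) for f in (0, 1) for _ in range(100)]
    accuracy[group] = sum(model[f] == y for f, y in test) / len(test)

print(accuracy)  # group "B" gets every prediction wrong
```

The model is perfectly accurate for the overrepresented group and completely wrong for the underrepresented one, even though nothing in the code is explicitly "biased" — the skew lives entirely in the training data.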

Some examples of algorithmic biases

Some AI systems hold rather dangerous biases. In one horrifying example, Google Cloud Vision labeled two images of people holding hand-held thermometers. One image showed a dark-skinned person holding the thermometer, while the other showed a light-skinned person holding the thermometer.

While the image with the light-skinned person was labeled “electronic device”, the image with the dark-skinned person was mislabeled “gun”. Such a bias is horrific and unacceptable. 

Fortunately, Google has updated its algorithm since then.

In another instance, Google Photos’ image recognition algorithms classified dark-skinned people as “gorillas”. In a crude attempt to fix the issue, Google simply stopped the algorithm from identifying gorillas at all.

When researchers at Princeton University used artificial intelligence to analyze 2.2 million words and create associations among them, they found some shocking results.

In the experiment, words like “girl” and “woman” were highly associated with the arts, while words like “man” were more strongly associated with science and math. The algorithm even perceived European names as more pleasant than African-American names.
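The kind of association the Princeton researchers measured can be sketched with cosine similarity between word vectors. The tiny hand-crafted "embeddings" below are invented purely for illustration (real studies use vectors trained on large corpora); the idea is the same: a word’s bias is how much closer it sits to one concept than another.

```python
import math

def cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-crafted 3-d "embeddings" (illustrative only, not trained vectors).
vec = {
    "woman":   (0.9, 0.1, 0.2),
    "man":     (0.1, 0.9, 0.2),
    "arts":    (0.8, 0.2, 0.3),
    "science": (0.2, 0.8, 0.3),
}

# Association score: how much closer a word is to "arts" than to "science".
def association(word):
    return cos(vec[word], vec["arts"]) - cos(vec[word], vec["science"])

print(association("woman"))  # positive: leans toward "arts"
print(association("man"))    # negative: leans toward "science"
```

With real embeddings trained on web text, scores like these reveal exactly the gendered and racial associations described above — the vectors absorb whatever patterns the corpus contains.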


How to fight algorithmic biases?

Diversity is the key to fighting algorithmic biases, and diversity efforts should be made at every step of the project or process. 

When creating artificial intelligence systems, you need to make sure that the training data properly represents the actual scenarios in which the algorithm is intended to be used. 

AI ethics should be stressed in organizations that build AI systems as well as in educational institutions. Organizations that build AI systems should focus on educating their employees on ethics and cultural differences. Everyone who builds, researches, or works on artificial intelligence systems should be aware of the dangers of algorithmic biases and work to avoid and correct them.

About Engati

Engati powers 45,000+ chatbot & live chat solutions in 50+ languages across the world.

We aim to empower you to create the best customer experiences you could imagine. 

So, are you ready to create unbelievably smooth experiences?

Check us out!

October 14, 2020