We have become so used to the invisible hands of Google, Facebook and Amazon that we feed them with data every day. In return, algorithms make our everyday lives easier: a fitting film suggestion on Netflix, a well-matched playlist on Spotify, a route around the traffic jam from GPS navigation, or search inspiration from Google’s autocomplete.
But algorithms do much more than satisfy our everyday needs. They have become indispensable in many areas of society: they can diagnose diseases before the first symptoms appear, studies suggest they could reduce traffic deaths by up to 90 percent, and their calculations can make production more resource-efficient and environmentally friendly.
The great social responsibility given to algorithms leads to the following question: what happens when algorithms make mistakes?
The problem is algorithmic bias. Algorithms are designed by humans and are therefore anything but objective. Human prejudices can be built in unintentionally when an algorithm is created: whoever selects the training data or sets the rules transfers subjective opinions into the system. This can lead to wrongly learned patterns and, above all, to discrimination against individuals.
A simple example shows the possible consequences of algorithmic bias: in 2015, Google’s image-recognition tool wrongly labeled a picture of a Black person as a gorilla, because the training data consisted mainly of pictures of white people. With so few Black faces to learn from, the model fell back on the category that scored a higher match probability: a data set of animals. The following examples show how far these effects can go.
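The mechanism can be sketched with a toy Bayes-style decision. All numbers here are invented for illustration and have nothing to do with Google’s actual model: the point is only that a class barely present in the training data gets such a small prior that it loses even when its raw likelihood is highest.

```python
from collections import Counter

# Hypothetical, heavily skewed training set: the "dark_skin_face"
# class is almost absent.
training_labels = (["light_skin_face"] * 500
                   + ["animal"] * 490
                   + ["dark_skin_face"] * 10)
priors = {label: n / len(training_labels)
          for label, n in Counter(training_labels).items()}

# Invented likelihoods for one photo of a dark-skinned face:
# the correct class actually fits best (0.6).
likelihoods = {"light_skin_face": 0.05, "animal": 0.5, "dark_skin_face": 0.6}

# Bayes-style score: likelihood * prior. The tiny prior (0.01)
# crushes the correct class, and the wrong one wins.
scores = {label: likelihoods[label] * priors[label] for label in priors}
prediction = max(scores, key=scores.get)
print(prediction)  # "animal"
```

The fix is not in the decision rule but in the data: with balanced training examples, the priors stop overriding the evidence.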
Algorithms are used to assess the creditworthiness of customers.
Instead of conventional lending criteria, such as financial history, algorithms are increasingly fed with unconventional data sets to determine creditworthiness.
Examples include the content of social media accounts, how much time someone spends reading the terms and conditions, and even typing speed.
In Finland, a company was reported to have used an algorithm that denied a man credit partly because of his native language.
Algorithms are used to help decide whether a child is at risk in their home.
The algorithm assigns families risk levels based on the data held about them. This puts families living in poverty at a disadvantage: simply because they receive social welfare, more data is kept about them. Poor families can therefore be classified as high-risk more often than wealthier families.
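A minimal sketch of this feedback effect, assuming a hypothetical scorer that treats the sheer number of agency records as a risk signal (the record types and threshold below are invented):

```python
# Hypothetical risk score: more agency records -> higher risk.
# Families on public assistance generate records through routine
# contact with the state, not through anything they did wrong.
def risk_score(records):
    return len(records)

wealthy_family = ["school_enrollment"]               # little state contact
poor_family = ["school_enrollment", "food_assistance",
               "housing_benefit", "medicaid_visit"]  # routine welfare records

THRESHOLD = 3  # invented cutoff for a "high risk" flag
print(risk_score(poor_family) > THRESHOLD)    # True: flagged
print(risk_score(wealthy_family) > THRESHOLD) # False: not flagged
```

Both families may be in the same underlying situation; only the poorer one crosses the threshold, because poverty itself produced the extra data.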
In the U.S., parents have complained that unfair predictive software may have influenced an investigator’s recommendation to take their children away.
Predictive policing uses algorithms to identify potential risk factors.
The software informs the length of prison sentences, decides which areas should receive extra police attention, predicts the risk posed by certain persons and rates people according to their propensity to violence.
COMPAS, a risk-assessment tool used by U.S. courts, wrongly labeled Black defendants as likely reoffenders at nearly twice the rate of white defendants.
Such an algorithm can base its decision on parameters like your gender, age, neighborhood, religion, sexual orientation and social status, and even on data collected covertly, for example your typing speed or your spelling.
The scores themselves are inaccessible. Usually you cannot know which data has a positive or negative impact on the final decision, so you cannot influence it. Most of the time you get the result without understanding the decision: if you get the loan, you can be happy and move on; if you don’t, you might ask yourself why. The answer could be a biased algorithm.
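To see why the decision is so hard to contest, consider a hypothetical linear credit score with hidden weights (everything here is invented). The applicant sees only the final verdict; the weights, and therefore which input helped or hurt, stay inside the system.

```python
# Invented, hidden feature weights: a covertly collected feature
# (typing speed) and a proxy for social status (postcode) can
# outweigh income without the applicant ever knowing.
HIDDEN_WEIGHTS = {"income": 0.4, "typing_speed": -0.3, "postcode": -0.5}

def decide(applicant):
    score = sum(HIDDEN_WEIGHTS[k] * applicant[k] for k in HIDDEN_WEIGHTS)
    return "approved" if score > 0 else "rejected"  # only this is revealed

# A solid income (1.0) is overridden by the two hidden penalties.
applicant = {"income": 1.0, "typing_speed": 0.8, "postcode": 1.0}
print(decide(applicant))  # "rejected"
```

From the outside, "rejected" is all there is: without access to the weights, the applicant cannot tell whether income, typing speed or postcode tipped the scale, let alone appeal it.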