Author: N/A
Publisher: TRT World
Publication Year: 2018
Summary: The video argues that because algorithms are written by humans, they are no more objective than we are. Examples of algorithmic bias include Amazon’s Alexa failing to recognize certain accents and Google Translate associating certain jobs with certain genders. Both machine learning and deep learning depend on huge amounts of data and on the people who train the algorithms. An image classification algorithm or a speech recognition algorithm is trained on the millions of images or voice samples it is fed, and the more data the algorithm is trained on, the better the outcome we can expect. Algorithmic bias comes into play when the training data is insufficient or unrelated to the scenarios at hand, introducing blind spots. The person writing the algorithm also plays a crucial role in the process: that individual’s values and beliefs, ethnicity, and cultural intelligence all shape the result. Companies bear the responsibility of avoiding algorithmic bias and ensuring that the analysts writing these algorithms do so in the spirit of social equality.
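To make the training-data point concrete, here is a minimal sketch (not from the video; the two "accent" groups, feature shifts, and sample counts are all illustrative assumptions) in which a classifier is trained almost entirely on samples from one group of speakers and then evaluated on both groups. The underrepresented group ends up with markedly worse accuracy, which is exactly the blind spot the summary describes:

```python
# Illustrative sketch of bias from unrepresentative training data.
# We simulate two "accent" groups whose feature distributions differ,
# train a classifier almost entirely on group A, and then measure
# per-group accuracy on balanced test sets. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class samples; `shift` moves the whole group's feature means."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)   # group B "sounds" different
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced evaluation: the model performs far worse on the group
# it barely saw during training.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```

Running the sketch prints a clear accuracy gap between the two groups, even though the model was never written to treat them differently; the bias comes entirely from what the training data did and did not contain.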