How Will Self-Flying Aircraft Make Ethical Choices?

Author: Thom Patterson; Publisher: Flying; Publication Year: 2022. The following article looks at the future of self-flying aircraft. Companies are currently working to develop autonomous aircraft, which could prove beneficial in the years ahead. They use data to train self-flying aircraft to follow a particular path to a given destination, and the aircraft improve each time they repeat the task…

Best Practices for Avoiding AI Biases in Data and Why It’s Important

Author: Sunil Yadav; Publisher: Baseline; Publication Year: 2022. The following article discusses how technology is created by humans and often reflects human biases. It is important to prioritize unbiased artificial intelligence (AI) algorithms and models, because biased AI systems can produce erroneous and discriminatory predictions and can impact a business’s reputation, future opportunities, and…

How the Responsible Use of AI can Create Safer Online Spaces

Author: Steve Durbin; Publisher: World Economic Forum; Publication Year: 2022. The following article discusses how, although artificial intelligence (AI) promises to improve and streamline business operations and everyday life, there are correspondingly growing concerns about the technology’s implementation. To counteract the possible negative effects of AI, data scientists need to account for the “inbuilt prejudices” that…

A Beauty Contest was Judged by AI and the Robots Didn’t like Dark Skin

Author: Sam Levin; Publisher: The Guardian; Publication Year: 2016. The following article discusses the first beauty contest judged by artificial intelligence (AI), which was held in 2016. The objective factors it considered included facial symmetry and wrinkles. Beauty.AI, created by Youth Laboratories and supported by Microsoft, received 6,000 submissions from over 100 countries and set out to identify the entrants who most closely resembled “human beauty…

Can We Protect AI from Our Biases?

Author: Robin Hauser; Publisher: TED; Publication Year: N/A. In the following talk, Robin Hauser discusses unconscious bias in artificial intelligence (AI) algorithms. While producing her new movie about unconscious bias, she became interested in finding out whether it would be possible to create AI without bias. As she came to find out, it is often harder to create unbiased algorithms for multiple…

Automated Translation is Hopelessly Sexist, but Don’t Blame the Algorithm or the Training Data

Author: Nicolas Kayser-Bril; Publisher: AlgorithmWatch; Publication Year: N/A. The following article discusses how automated translation services tend to erase women or reduce them to stereotypes, and why simply tweaking the training data or the models is not enough to make translations fair. It offers an interesting discussion of translation datasets and the quirks that lead to sexism in different ways across different languages. The…
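
A toy sketch of the kind of dataset quirk the article points to: if a training corpus pairs an occupation with one gendered pronoun far more often than the other, a translation model can absorb that skew and impose a gender even when the source language is neutral. The corpus, function name, and counts below are invented for illustration and are not drawn from the article.

```python
# Toy illustration (not AlgorithmWatch's analysis) of pronoun skew in a corpus.
from collections import Counter
import re

corpus = [
    "she is a nurse and works nights",
    "the nurse said she would call back",
    "he is a nurse at the clinic",
    "he is an engineer on the bridge project",
    "the engineer said he needed more time",
    "she is an engineer too",
]

def pronoun_skew(sentences, occupation):
    """Count how often an occupation appears in a sentence with 'he' vs 'she'."""
    counts = Counter()
    for sentence in sentences:
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        if occupation in words:
            counts.update(p for p in ("he", "she") if p in words)
    return counts

for job in ("nurse", "engineer"):
    print(job, dict(pronoun_skew(corpus, job)))
# Skews like these become the statistics a model absorbs, which is part of why
# the article argues that fairness cannot come from the data alone.
```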

SAM: The Sensitivity of Attribution Methods to Hyperparameters

Author: Naman Bansal, Chirag Agarwal, Anh Nguyen; Publisher: N/A; Publication Year: 2020. The following article, with examples and an accompanying talk, illustrates how many machine learning classification systems, including more explainable models rather than just black boxes, are highly susceptible to manipulation and simple mistakes. These models, which seek to show how they classify or recognize an image, can give vastly different results with…
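
As a rough illustration of that sensitivity, the sketch below computes a SmoothGrad-style gradient attribution for the same image and class twice, changing only the noise hyperparameter, and compares the two maps. The model choice, noise levels, sample count, and rank-correlation metric are assumptions for illustration, not the paper’s exact setup.

```python
# Minimal sketch: the same attribution method, the same image and class, but a
# different noise hyperparameter can yield a noticeably different explanation.
import torch
from torchvision import models

def smoothgrad_saliency(model, image, target, noise_sigma=0.1, n_samples=25):
    """Average input-gradient magnitudes over noisy copies of the image."""
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target]
        score.backward()
        grads += noisy.grad.abs()
    return (grads / n_samples).sum(dim=0)  # collapse channels -> H x W map

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(3, 224, 224)          # stand-in for a preprocessed photo
target = model(image.unsqueeze(0)).argmax().item()

# Same model, same image, same class -- only the noise level differs.
map_low  = smoothgrad_saliency(model, image, target, noise_sigma=0.05)
map_high = smoothgrad_saliency(model, image, target, noise_sigma=0.30)

# Rank correlation between the two maps; values well below 1.0 indicate the
# explanation depends heavily on the hyperparameter choice.
ranks_low  = map_low.flatten().argsort().argsort().float()
ranks_high = map_high.flatten().argsort().argsort().float()
rank_corr = torch.corrcoef(torch.stack([ranks_low, ranks_high]))[0, 1]
print(f"rank correlation between the two saliency maps: {rank_corr:.3f}")
```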

Increasing Transparency in Perspective’s Machine Learning Models

Author: Lucy Vasserman, John Cassidy; Publisher: Medium; Publication Year: 2019. The following article discusses how the Jigsaw team at Google analyzed Perspective API’s toxicity model, which assigns toxicity scores to online comments from a variety of sources. In the training data, these comments are rated by humans, who score each comment on a toxicity scale along with some other optional attributes. In this post about model…
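
For context, here is a minimal sketch of requesting a toxicity score from the Perspective API the post analyzes. The endpoint and field names follow the public API documentation as best I recall it and may have changed; the API key is a placeholder you would need to supply.

```python
# Minimal sketch of scoring one comment's toxicity with the Perspective API.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the summary TOXICITY score (0.0-1.0) for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    for comment in ["Thanks for the thoughtful reply!", "Nobody asked for your opinion."]:
        print(f"{toxicity_score(comment):.2f}  {comment}")
```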

HireVue Assessments and Preventing Algorithmic Bias

Author: Loren Larson; Publisher: HireVue; Publication Year: 2018. The following article discusses how HireVue is committed to good science that creates a level playing field for all candidates. Without deliberate work to reduce bias that may reside in an algorithm’s training data or be introduced by the data scientists who create it, algorithms are absolutely at risk of inheriting the biases of humans. If HireVue finds a feature that indirectly…
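
One hedged sketch of what screening for such a feature might look like, not HireVue’s actual method: flag any candidate feature whose correlation with a protected attribute exceeds a threshold so a human can review whether to drop it. All column names, data values, and the threshold below are hypothetical.

```python
# Illustrative proxy-feature screen; every value here is made up.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.3):
    """Return features whose |correlation| with the protected attribute is high."""
    group = df[protected].astype("category").cat.codes  # encode group labels as ints
    flagged = {}
    for column in df.columns.drop(protected):
        corr = df[column].corr(group)                   # Pearson correlation
        if abs(corr) >= threshold:
            flagged[column] = round(corr, 3)
    return flagged

# Toy interview-scoring data for illustration only.
candidates = pd.DataFrame({
    "gender":          ["f", "m", "f", "m", "f", "m", "f", "m"],
    "speech_rate":     [3.1, 3.2, 3.3, 2.8, 2.9, 3.1, 3.0, 3.0],
    "keyword_overlap": [0.62, 0.35, 0.70, 0.30, 0.66, 0.40, 0.68, 0.33],
})

print(flag_proxy_features(candidates, protected="gender"))
# A feature like keyword_overlap that tracks gender closely would be flagged
# for review, echoing the article's point about indirect bias in features.
```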

Justice for “Data Janitors”

Author: Lilly Irani; Publisher: Public Books; Publication Year: 2015. The following article describes how, when things in the technology world become automated, the work that is being “replaced” is not actually replaced but displaced. For example, a manufacturing process that has been automated may replace the individual workers, but over the long term it simply displaces them to monitor the machines and facilitate…