Bad, biased, and unethical uses of AI

Author: Anthony Macciola

Publisher: The Enterprisers Project

Publication Year: 2019

Summary: This article provides four examples of bad, biased, or unethical uses of AI, along with guidance for Chief Information Officers (CIOs) on how to avoid these pitfalls. The first example describes a mortgage-lending algorithm that perpetuated the effects of redlining: it collected ZIP codes, inferred race from the demographics of the user's current neighborhood, and recommended only neighborhoods with similar demographics (a failure mode sketched in the code below). The second example describes the Amazon hiring software that had to be scrapped because it discriminated against female candidates; the article notes that LinkedIn learned from this mistake by correcting for potential gender bias. The third example concerns search algorithms: UCLA professor Safiya Umoja Noble wrote "Algorithms of Oppression" after searching the term "Black women" on Google and finding pages filled with pornography, an issue that has since been corrected. The fourth example describes a medical school admissions algorithm that inadvertently excluded nearly all minority and female applicants.
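
The redlining example can be made concrete with a deliberately simplified sketch. The snippet below is hypothetical: the article does not publish the lender's code, and every ZIP code, demographic figure, and function name here is invented for illustration. It shows how a recommender that filters neighborhoods by demographic similarity to the user's current ZIP code can reproduce historical segregation without ever storing a race field.

```python
# Hypothetical sketch of the redlining feedback loop described above.
# All data and names are invented for illustration; the article does not
# disclose the actual lender's code or features.

# Toy demographic profile per ZIP code (population fractions by group).
ZIP_DEMOGRAPHICS = {
    "10001": {"group_a": 0.70, "group_b": 0.30},
    "10002": {"group_a": 0.20, "group_b": 0.80},
    "10003": {"group_a": 0.65, "group_b": 0.35},
}

def similarity(p: dict, q: dict) -> float:
    """Crude demographic similarity: 1 minus total variation distance."""
    groups = p.keys() | q.keys()
    return 1.0 - 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)

def recommend_neighborhoods(current_zip: str, threshold: float = 0.8) -> list[str]:
    """Recommends only ZIP codes demographically similar to the user's
    current one -- the proxy-for-race behavior the article criticizes."""
    here = ZIP_DEMOGRAPHICS[current_zip]
    return [
        z for z, demo in ZIP_DEMOGRAPHICS.items()
        if z != current_zip and similarity(here, demo) >= threshold
    ]

# A user in "10001" is steered toward "10003" and away from "10002",
# reinforcing existing segregation without any explicit race feature.
print(recommend_neighborhoods("10001"))  # -> ['10003']
```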

The article goes on to cite examples of steps that tech companies have taken to address the ethical use of data, and concludes by posing the following four questions for CIOs:

1. Is the data behind your AI technology good, or does it contain algorithmic bias?
2. Are you rigorously reviewing AI algorithms to ensure they are properly tuned and trained to produce expected results against predefined test sets? (A minimal version of such a check is sketched below.)
3. Are you adhering to transparency principles in how AI technology affects the organization internally, and customers and partner stakeholders externally?
4. Have you set up a dedicated AI governance and advisory committee, including cross-functional leaders and external advisers, to establish and oversee the governance of AI-enabled solutions?
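
Question 2 asks whether algorithms are reviewed against predefined test sets. The sketch below illustrates one minimal form such a review could take: computing per-group selection rates on a fixed, labeled test set and flagging disparate impact. The function names, toy data, and the 80 percent ("four-fifths") threshold are assumptions added for illustration, not prescriptions from the article.

```python
# Minimal sketch of the kind of pre-deployment review question 2 calls for:
# run the model over a predefined test set and compare positive-decision
# rates across protected groups. Everything here is illustrative.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (e.g., 'hire'/'approve') decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_check(predictions, groups, threshold=0.8):
    """Flags any group whose selection rate falls below `threshold`
    times the highest group's rate (the common "four-fifths" rule)."""
    rates = selection_rates(predictions, groups)
    best = max(rates.values())
    return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

# Toy audit: model decisions on a fixed test set with known group labels.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
print(disparate_impact_check(preds, groups))
# -> {'m': (0.8, True), 'f': (0.2, False)}   # 'f' fails the 80% rule
```

A check like this is only a starting point; it surfaces outcome disparities on the chosen test set but says nothing about why they arise, which is why the article pairs testing with transparency and governance questions.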