Author: Mary Pratt
Publisher: TechTarget
Publication Year: 2021
Summary: The following article features artificial intelligence (AI) experts who pointed to a Microsoft-designed chatbot called Tay as an example of how bias works and how it can hurt a business. Microsoft used machine learning and natural language processing technologies to create Tay, a chatbot meant to learn from and engage with the online community as if it were a teenage girl, and released it onto Twitter in 2016. Online trolls quickly bombarded the bot with racist, misogynistic and antisemitic language. Because that hate speech was overrepresented in the bot's input, and no safeguards prevented the chatbot from learning and repeating such language, Tay quickly began posting harmful messages, and Microsoft suspended the experiment the same day. Experts said the incident shows how AI bias can hurt companies, and they highlighted five specific ways it can be detrimental to an organization: (1) ethical issues; (2) reputational damage; (3) lost opportunities; (4) lack of trust from users; and (5) regulatory and compliance problems.