Author: Katherine Miller
Publisher: Stanford University, Human-Centered Artificial Intelligence
Publication Year: 2022
Summary: This article concerns language models and briefly describes three ethical problems common to them. First, they have a propensity to produce toxic output when given certain inputs. Second, they frequently respond to prompts untruthfully, answering truthfully only about 25% of the time. Third, language models used to translate from English into other languages often exhibit gender bias; for example, they are more likely to mistranslate sentences in which a doctor is referred to as "she" rather than "he."