Keywords: Access, Automation, Discrimination, Environmental Harms, Exclusion, Information Hazards, Language Models, Malicious Uses, Misinformation, Risks, Toxicity
Author: Laura Weidinger et al.
Publisher: DeepMind
Publication Year: 2021
Summary: This research paper organizes the risk landscape associated with large-scale Language Models (LMs). Advancing responsible innovation requires a thorough understanding of the potential risks these models pose. The report discusses 21 distinct ethical and social risks, along with the sources of those risks and potential mitigation strategies. It groups the harms into six areas: Discrimination, Exclusion, and Toxicity; Information Hazards; Misinformation Harms; Malicious Uses; Human-Computer Interaction Harms; and Automation, Access, and Environmental Harms.