Author: Tony Sun, Andrew Gaut, Shirlyn Tang, et al.
Publisher: arXiv
Publication Year: N/A
Summary: This article discusses how, despite their success in modeling a wide range of applications, natural language processing (NLP) models propagate and may even amplify gender bias found in text corpora. While the study of bias in artificial intelligence is not new, methods for reducing gender bias in NLP are still in their early stages. The authors review recent studies on recognizing and mitigating gender bias in NLP, framing gender bias in terms of four types of representation bias and examining methods for detecting it. They also weigh the benefits and drawbacks of existing gender-debiasing methods. Finally, the authors outline directions for future NLP research on recognizing and mitigating gender bias.