Author: Scott Murray

Publisher: MIT News

Publication Year: 2022

Summary: This article discusses how facial recognition algorithms have been shown to be less accurate at detecting darker-skinned individuals, resulting in harmful bias and wrongful use. Rather than combating systemic racism and social inequality, artificial intelligence (AI) often perpetuates these disparities, demonstrating the lasting impact that flawed algorithmic models can have on society. One industry in which such inequity is prevalent is healthcare, where certain groups experience unequal access to quality care. Although data can give providers valuable insight for more effective and personalized care, users need primary authority over who accesses their information and how it is used. Collaboration across fields of expertise, along with advocacy for equality, can promote a greater understanding of how data can be used ethically and hold data professionals accountable for removing racial biases from the models they create and disseminate.