Author: Jacob Ladd
Publisher: Michigan Technology Law Review
Publication Year: N/A
Summary: The article discusses how artificial intelligence (AI) systems may circumvent existing antidiscrimination laws through “proxy discrimination”: the use of a facially neutral variable whose predictive significance derives from its correlation with membership in a suspect class. As an example, the article describes a hiring algorithm for a job in which height is genuinely relevant to performance, but height data is unavailable to the algorithm. Because height correlates with sex, the algorithm may instead learn to rely on sex, or on other data correlated with sex, effectively treating sex as a proxy for height. Unlike proxy discrimination carried out by humans, proxy discrimination by AI is more likely to be unintentional. To regulate and address the issue, the article offers three potential strategies: (1) allowing AI models to collect data on individuals’ protected characteristics so that this data can be reported to regulators and/or the public; (2) implementing “ethical algorithms” that use statistical methods to eliminate or correct for correlations between facially neutral characteristics and protected characteristics; and (3) prohibiting all forms of discrimination except those that are specifically permitted.
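The proxy mechanism described above can be illustrated with a small, self-contained sketch. The scenario, feature names, and numbers below are hypothetical (they are not from the article): job performance truly depends on height, height is never collected, and the only available feature is a facially neutral variable that happens to correlate with sex. A plain linear model still ends up scoring men and women differently, because the neutral feature works as a proxy for sex, and sex as a proxy for height.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic: sex (0 = female, 1 = male), never shown to the model.
sex = rng.integers(0, 2, size=n)

# Height is genuinely job-relevant but is NOT collected, so the model cannot use it.
height = np.where(sex == 1, 178, 165) + rng.normal(0, 7, size=n)

# A facially neutral feature (hypothetical, e.g. a purchasing-history score)
# that correlates with sex but has no causal link to job performance.
neutral_feature = 0.8 * sex + rng.normal(0, 1, size=n)

# True job performance depends only on height (plus noise).
performance = 0.05 * (height - 170) + rng.normal(0, 0.5, size=n)

# Train a simple linear model on the only feature it is given.
X = np.column_stack([np.ones(n), neutral_feature])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("learned weight on the 'neutral' feature:", round(coef[1], 3))

# The model scores men higher on average even though sex was never an input.
scores = X @ coef
print("mean score, men:  ", round(scores[sex == 1].mean(), 3))
print("mean score, women:", round(scores[sex == 0].mean(), 3))
```

Nothing in this sketch involves intent: the disparity arises purely from the correlation structure of the data, which is why the article characterizes AI proxy discrimination as typically unintentional.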
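The article does not prescribe a specific statistical correction for its second strategy, but one simple method of the kind it describes (an assumption for illustration, not the article’s own procedure) is to regress each facially neutral feature on the protected characteristic and keep only the residual, so the adjusted feature is linearly uncorrelated with the protected class before any model is trained. Note that this requires collecting the protected characteristic in the first place, which connects strategy (2) to strategy (1).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Same hypothetical setup as the previous sketch (names are illustrative).
sex = rng.integers(0, 2, size=n)
height = np.where(sex == 1, 178, 165) + rng.normal(0, 7, size=n)
neutral_feature = 0.8 * sex + rng.normal(0, 1, size=n)
performance = 0.05 * (height - 170) + rng.normal(0, 0.5, size=n)

def decorrelate(feature: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """Remove the linear component of `feature` explained by `protected`,
    leaving a residual that is linearly uncorrelated with the protected class."""
    Z = np.column_stack([np.ones_like(protected, dtype=float), protected])
    beta, *_ = np.linalg.lstsq(Z, feature, rcond=None)
    return feature - Z @ beta

adjusted = decorrelate(neutral_feature, sex)

# After adjustment the feature no longer tracks sex ...
print("corr(feature, sex) before:", round(np.corrcoef(neutral_feature, sex)[0, 1], 3))
print("corr(feature, sex) after: ", round(np.corrcoef(adjusted, sex)[0, 1], 3))

# ... so a model trained on it can no longer use it as a proxy for sex.
X = np.column_stack([np.ones(n), adjusted])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
scores = X @ coef
print("mean score, men:  ", round(scores[sex == 1].mean(), 3))
print("mean score, women:", round(scores[sex == 0].mean(), 3))
```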