Author: Kevin Peachey
Publisher: BBC News
Publication Year: 2019
Summary: The article describes how, in November 2019, a tech entrepreneur reported that the credit limit he was approved for on his Apple Card was 20 times higher than his wife's, even though they held equal shares in their property, filed joint tax returns, and she had a better credit score than he did. Regulators began to investigate, and Goldman Sachs, the bank that runs the Apple Card, was heavily criticized. As it turned out, the artificial intelligence (AI) algorithm Goldman Sachs was using to set borrowing limits for Apple Card users was biased against women. Goldman Sachs did not collect applicants' gender, race, or age, which would have been illegal. However, the algorithm learned from historical data to approve lower amounts for people with backgrounds or occupations associated with women. When AI algorithms are trained on historical records of who has been approved in the past, the human biases embedded in those records are carried over and reinforced in the decisions the AI makes. One option for mitigating this is to tell a customer why a decision was made and which elements of their data were the most important, although there is little agreement on the best way to do this. Another possibility is to train algorithms on less specific information, reducing the chance that the algorithm picks up on proxy variables for gender or race. However, this could make the AI less precise and thus less useful for decision-making. Each approach has its pros and cons.
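
To make the proxy mechanism concrete, here is a minimal Python sketch. All data, feature names, and coefficients are invented for illustration and have nothing to do with Goldman Sachs' actual system: a classifier that never sees gender still approves women at a lower rate, because an occupation feature correlated with gender stands in for it.

```python
# Minimal synthetic sketch of proxy-variable bias. "gender" is never a
# model input, but a correlated feature ("occupation_group") lets a
# classifier trained on historically biased approvals reproduce the
# bias anyway. All data, names, and coefficients here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute the lender never collects.
gender = rng.integers(0, 2, n)  # 0 = man, 1 = woman

# Proxy: occupation group skews heavily by gender in this toy data.
occupation = (rng.random(n) < np.where(gender == 1, 0.8, 0.2)).astype(int)

# A legitimate signal, independent of gender.
income = rng.normal(60, 15, n)

# Historical human decisions penalize the proxy group directly.
logit = 0.05 * (income - 60) - 1.5 * occupation + rng.normal(0, 1, n)
approved = (logit > 0).astype(int)

# Train only on the features a lender would lawfully collect.
X = np.column_stack([income, occupation])
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

print("approval rate, men:  ", preds[gender == 0].mean())
print("approval rate, women:", preds[gender == 1].mean())
# The gap persists even though gender was never given to the model.
```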
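
Continuing the sketch, one simple way to show a customer which elements of their data drove a decision is to report each feature's contribution to a linear model's score (coefficient times feature value). This is only one of many competing explanation methods, which reflects the article's point that there is little agreement on the best approach.

```python
# Continues the sketch above (reuses np, X, model).
feature_names = ["income", "occupation_group"]
applicant = X[0]

# Per-feature contribution to the log-odds of approval.
contributions = model.coef_[0] * applicant
decision = model.predict(applicant.reshape(1, -1))[0]
print("decision:", "approved" if decision else "declined")
for i in np.argsort(-np.abs(contributions)):
    print(f"  {feature_names[i]}: {contributions[i]:+.2f}")
```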
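
Finally, still continuing the sketch, training on less specific information (here, dropping the occupation proxy and keeping income alone) shrinks the gender gap in predictions but also lowers accuracy against the historical labels, which is the precision trade-off the article describes.

```python
# Continues the sketch above (reuses income, approved, preds, gender).
coarse = LogisticRegression().fit(income.reshape(-1, 1), approved)
coarse_preds = coarse.predict(income.reshape(-1, 1))

def rate_gap(p):
    # Difference in predicted approval rates between men and women.
    return abs(p[gender == 0].mean() - p[gender == 1].mean())

print("approval-rate gap, full model:  ", rate_gap(preds))
print("approval-rate gap, income only: ", rate_gap(coarse_preds))
print("accuracy vs. history, full model:  ", (preds == approved).mean())
print("accuracy vs. history, income only: ", (coarse_preds == approved).mean())
```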