Author: Vince Lynch
Publisher: TechCrunch+
Publication Year: 2018
Summary: The article discusses how bias in artificial intelligence (AI) causes problems in two ways: AI outputs are often trusted blindly, so any human bias that slips in can spread unchecked, and AI embedded in automated systems can propagate bias without anyone noticing. The article names three ways to avoid bias in machine learning:
1) Choose the right learning model for the problem. Unsupervised learning can pick up bias by conflating correlated features; for example, "Group A often does Behavior B" can be learned as "everyone who does Behavior B belongs to Group A." Supervised learning, on the other hand, involves much more human input, which increases the potential for human bias. It is better to catch such problems before training than to repair the damage afterward.
2) Choose a representative dataset. It is important to use a single dataset that represents potential minority groups. Training on separate datasets per subgroup can cause problems, and over-weighting small subgroups can cause a model to pick up random noise as trends.
3) Monitor performance using real data. Algorithms can work well in a controlled environment and still cause issues after deployment. Data scientists should check whether all groups of people receive the same result from the model, for example that loan rates are not influenced by an applicant's ethnicity.
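The first tip, catching conflated features before training, can be illustrated with a simple pre-training check. The sketch below (not from the article; the data and function names are hypothetical) computes a point-biserial correlation between a binary group attribute and a numeric feature — a high absolute value suggests the feature acts as a proxy for group membership and could smuggle bias into the model:

```python
from statistics import mean

def point_biserial(binary, values):
    """Correlation between a binary attribute (0/1) and a numeric feature.
    A high absolute value flags the feature as a potential group proxy."""
    ones = [v for b, v in zip(binary, values) if b == 1]
    zeros = [v for b, v in zip(binary, values) if b == 0]
    n = len(values)
    p, q = len(ones) / n, len(zeros) / n
    mu = mean(values)
    sd = mean((v - mu) ** 2 for v in values) ** 0.5  # population std dev
    if sd == 0:
        return 0.0
    return (mean(ones) - mean(zeros)) * (p * q) ** 0.5 / sd

# Hypothetical data: group membership (0/1) and a correlated feature
group = [1, 1, 1, 0, 0, 0, 1, 0]
feature = [80, 85, 82, 40, 38, 45, 78, 42]
r = point_biserial(group, feature)
print(f"proxy correlation: {r:.2f}")  # values near +/-1 mean the feature leaks group info
```

Running a check like this on every candidate feature before training makes the "find the problem beforehand" advice concrete.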
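The second tip, choosing a representative dataset, amounts to comparing subgroup shares in the training sample against the population they should reflect. A minimal sketch of such an audit (the sample data and census shares are invented for illustration):

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares, tol=0.05):
    """Compare subgroup shares in a training sample against known
    population shares; flag any group off by more than `tol`."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    gaps = {}
    for group, target in population_shares.items():
        actual = counts.get(group, 0) / n
        if abs(actual - target) > tol:
            gaps[group] = (actual, target)
    return gaps

# Hypothetical sample vs. assumed population shares
sample = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
census = {"A": 0.50, "B": 0.30, "C": 0.20}
print(representation_gaps(sample, census))
# Here every group is flagged: A is over-represented, B and C under-represented
```

Fixing the flagged gaps by collecting more data for under-represented groups, rather than simply re-weighting them, avoids the noise-amplification problem the article warns about.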
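The third tip, monitoring a deployed model with real data, can be sketched as a per-group outcome audit. The example below (hypothetical decision log; the article does not provide code) computes the rate of positive outcomes per group, so a data scientist can see at a glance whether, say, loan approvals diverge across groups:

```python
def positive_rates(records):
    """Per-group rate of positive outcomes from a deployed model.
    records is a list of (group, outcome) pairs; large gaps between
    groups warrant investigation for bias."""
    totals, positives = {}, {}
    for group, positive in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical post-deployment loan decisions: (group, approved?)
log = [("X", True), ("X", True), ("X", False),
       ("Y", True), ("Y", False), ("Y", False)]
print(positive_rates(log))  # group X is approved far more often than group Y
```

A gap like the one above does not prove bias on its own, but it is exactly the post-deployment signal the article says should trigger a closer look.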