Ethical Issues Arising Due to Bias in Training A.I. Algorithms in Healthcare and Data Sharing as a Potential Solution

Authors: Bilwaj Gaonkar, Ph.D.; Kirstin Cook, B.A.; Luke Macyszyn, M.D.

Publisher: The AI Ethics Journal

Publication Year: 2020

Summary: In this article, the authors explain that every human decision made during model development introduces some degree of human bias into the model. They identify three forms of bias that affect the deployment of machine learning algorithms in clinical practice:

1. Bias arising from the characteristics of the collected data. Because clinical data usually comes from electronic health records, the model will not make accurate predictions for any subgroup absent from that data; the distribution of the training data needs to match that of the actual patient population (a minimal sketch of such a check follows this summary).

2. Bias introduced by the annotator of the data. Two patients with similar symptoms may be treated differently by two different physicians, so a model learns how the specific physician who annotated the data treats patients.

3. Bias that emerges once the deployed model continues training itself in a feedback loop. The authors note that models are effectively immortal: unlike a human physician, a model can continue to learn and adapt for decades. This gives communities that deployed models earlier an advantage over later adopters, even at the same level of technology, because their models will have learned that population's genetics and patterns and can use them to make more accurate predictions.
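The distribution-matching concern in the first point can be made concrete. Below is a minimal sketch, not taken from the article, of an audit that compares the subgroup makeup of a training cohort against reference population proportions; the record format, subgroup labels, proportions, and tolerance threshold are all hypothetical placeholders.

```python
# Minimal sketch (assumption, not the article's method): flag subgroups whose
# share in an EHR-derived training cohort diverges from a reference population.
from collections import Counter

def subgroup_proportions(records, key):
    """Fraction of records in each subgroup, e.g. key='ethnicity'."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representation_gaps(train_records, population_props, key, tolerance=0.05):
    """Return subgroups whose training-data share differs from the population
    share by more than `tolerance` (absolute proportion)."""
    train_props = subgroup_proportions(train_records, key)
    gaps = {}
    for group, pop_share in population_props.items():
        train_share = train_props.get(group, 0.0)  # 0.0 if absent from the EHR data
        if abs(train_share - pop_share) > tolerance:
            gaps[group] = (train_share, pop_share)
    return gaps

# Hypothetical usage: an EHR cohort vs. census-style reference shares.
cohort = [{"ethnicity": "A"}] * 800 + [{"ethnicity": "B"}] * 150 + [{"ethnicity": "C"}] * 50
reference = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gaps(cohort, reference, key="ethnicity"))
# -> {'A': (0.8, 0.6), 'B': (0.15, 0.25), 'C': (0.05, 0.15)}
```

Under these made-up numbers, all three subgroups are flagged: group A is overrepresented and groups B and C are underrepresented, exactly the kind of mismatch the authors argue leads to inaccurate predictions for underrepresented subgroups.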