Author: Satish Gattadahalli
Publisher: STAT
Publication Year: 2020
Summary: This article discusses the use of artificial intelligence (AI) in health care and the ethical issues that must be addressed to avoid harming patients, creating liability for health care providers, and undermining public trust in these technologies. The article explains why AI bias should not be overlooked: although algorithmic bias is not unique to predictive AI, AI tools can amplify such biases and compound existing health care inequalities. As health care systems increasingly adopt AI technologies, data governance structures must evolve to ensure that ethical principles are applied to all clinical, information technology, education, and research endeavors. The article proposes a data governance framework that can help health care systems adopt AI applications in ways that reduce ethical risks to patients, providers, and payers. Health care systems should operationalize their AI strategy through a digital ethics steering committee composed of the chief data officer, chief privacy officer, chief information officer, chief health informatics officer, chief risk officer, and chief ethics officer. Peer reviewers may include internal and external care providers, researchers, educators, and diverse groups of data scientists other than the AI algorithm developers. A robust training plan must also underscore the ethical and clinical nuances that arise among patients and care providers when using AI-based tools. Patients should also demand informed consent and transparency about the data used in algorithms, AI tools, and clinical decision support systems.