The Ethics of Psychological Artificial Intelligence: Clinical Considerations

Authors: Russell Fulmer, Tonya Davis, Cori Costello, Angela Joerin

Publisher: Wiley Online Library

Publication Year: 2021

Summary: In this article, Russell Fulmer, Tonya Davis, Cori Costello, and Angela Joerin, researchers at Northwestern University, discuss how artificial intelligence (AI) is affecting clinical psychology practice. They cover six main ethical issues surrounding psychological AI:

1. Boundaries of Competence: Practitioners are adopting AI systems without fully understanding what the output means. Training in AI should be incorporated into standard university psychology education.

2. Limited Ethics Codes: Because AI is such a new technology, few formal mechanisms exist to ensure practitioners adhere to ethical standards when using it.

3. Transparency: Once AI is more commonly used in clinical settings, do doctors and counselors have a moral obligation to disclose its use to their clients? This is a thorny question that still must be resolved.

4. Cultural Diversity: There are well-publicized cases of AI systems discriminating against people because the training data represented only certain ethnicities and backgrounds.

5. Reliability and Validity: Data scientists understand that model predictions are probabilistic and imperfect, but laypeople may not, and may take classifications such as "depressed" or "anxious" from a psychological AI system as factual diagnoses.

6. Cybersecurity: AI systems are difficult to secure; they have many moving components and many avenues for attackers to bypass protections. These vulnerabilities must be addressed to safeguard the confidentiality of therapy sessions.