Authors: Maureen Guarcello, Linda Feng, Shahriar Panahi, Szymon Machajewski, and Marcia Ham

Publisher: Educause Review

Publication Year: 2021

Summary: This article examines the ethics of using artificial intelligence (AI) and machine learning (ML) to predict student success. Many of the algorithms we encounter day to day are relatively harmless, but those that influence a student's success and career trajectory must be approached with great caution so that bias does not creep into the models. For instance, these models can disadvantage minority students because far fewer data points are available for underrepresented groups, making the models less reliable when they make decisions about those students. This is just one example of the ethical risks involved. To combat this, the authors issue a "call to action," suggesting that practitioners learn more about specific types of bias and how they can affect student success models, gain greater familiarity with learning management systems, and work to "support data democratization." Although bias is a very real phenomenon in student success analytics, it is the data scientist's responsibility to support students by keeping bias out of the AI and ML models that shape their success.
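The article itself contains no code, but a minimal sketch (synthetic data and scikit-learn's LogisticRegression; the groups, features, and thresholds below are illustrative assumptions, not drawn from the article) can show the mechanism the summary describes: when one group is heavily underrepresented in the training data, a single model tends to fit the majority group's patterns and makes more mistakes for the minority group.

```python
# Illustrative sketch (not from the article): train one classifier on
# imbalanced synthetic data and compare per-group accuracy. Group B is
# deliberately underrepresented, mimicking the point that fewer data points
# for underrepresented students can degrade model accuracy for them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Generate synthetic 'engagement' features and a success label.
    `shift` makes the groups' patterns differ, so one shared model
    cannot fit both equally well."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # Success depends on the features plus noise; the relationship differs by group.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Majority group A (many records) and minority group B (few records).
X_a, y_a = make_group(2000, shift=0.0)
X_b, y_b = make_group(100, shift=1.0)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)

# Per-group accuracy: the underrepresented group typically fares worse
# because the fitted model is dominated by the majority group's patterns.
print(f"Accuracy, majority group A: {model.score(X_a, y_a):.2f}")
print(f"Accuracy, minority group B: {model.score(X_b, y_b):.2f}")
```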