Author: Cem Dilmegani

Publisher: AIMultiple

Publication Year: 2022

Summary: The article argues that in an imperfect world, AI cannot be expected to be completely unbiased; however, bias can be minimized through proper testing of algorithms and by building AI systems according to responsible AI principles. The article summarizes fixing bias in AI systems in six steps:

1) Fathom the algorithm and data: check the training data for sampling bias, perform subpopulation analysis for specific groups in the data, and monitor the model over time so it keeps learning as the training data changes.

2) Establish a de-biasing strategy covering technical, operational, and organizational aspects: tools, internal teams or third-party auditors, and a transparent workplace culture.

3) Improve human-driven processes as bias is uncovered, including training, process changes, and cultural changes that reduce bias.

4) Determine which use cases call for automated decision-making and which require human involvement.

5) Take a multidisciplinary approach that includes subject-matter experts.

6) Diversify the organization, since a diverse team is better positioned to notice bias issues.

Beyond these six strategies, tools and frameworks such as AI Fairness 360 and IBM Watson OpenScale can detect and mitigate bias.
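The subpopulation analysis mentioned in the first step can be sketched as a per-group comparison of model outcomes. The following minimal illustration is not from the article: the group labels, sample predictions, and the "four-fifths" threshold are hypothetical assumptions used only to show the idea of comparing positive-prediction rates across subpopulations.

```python
from collections import defaultdict

def positive_rate_by_group(groups, predictions):
    """Fraction of positive (1) predictions within each subpopulation."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of unprivileged to privileged positive rates (1.0 = parity)."""
    return rates[unprivileged] / rates[privileged]

# Hypothetical model outputs for two subpopulations "a" and "b".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
preds  = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

rates = positive_rate_by_group(groups, preds)
print(rates)  # {'a': 0.75, 'b': 0.25}

# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
print(disparate_impact(rates, privileged="a", unprivileged="b"))  # 0.333...
```

Dedicated frameworks such as AI Fairness 360 compute this and many other fairness metrics out of the box; the point of the sketch is only that subpopulation analysis reduces to slicing model outcomes by group and comparing the slices.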