Author: N/A
Publisher: Deloitte
Publication Year: 2019
Summary: This publication emphasizes the need for artificial intelligence (AI) that is transparent and responsible: thoroughly tested, explainable, and developed with ethical considerations in mind. It presents four key guidelines for the development and use of responsible AI:
1) To make AI transparent, make it explainable. Even for "black box" machine learning algorithms, it is possible, and necessary, to explain how a model reached a decision.
2) Screen AI algorithms for bias. Developers should check whether certain groups are under- or overrepresented in model outcomes, which can reveal hidden biases in the data, and adjust the model accordingly (a minimal sketch of such a screen follows this list).
3) Ensure company leadership understands the technologies that analysts and developers use for decision-making. Executives often have a rough idea of what algorithms do but not a clear understanding, which creates risk.
4) Embed ethics in the organization. Ethical AI is demanded not only by customers but often by employees as well, so companies should make their values around the use of AI clear and stand by them.
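To make guideline 2 concrete, the following is a minimal sketch, not from the publication itself, of the kind of outcome-rate comparison it describes: compute the favorable-outcome rate per group and flag groups falling well below the best-treated group. The group labels, data, and 0.8 threshold (the common "four-fifths" heuristic) are illustrative assumptions.

```python
# Hypothetical bias screen: compare selection (favorable-outcome) rates
# across groups and flag large gaps. Data and threshold are illustrative.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, got_favorable_outcome) pairs."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for group, selected in outcomes:
        total[group] += 1
        favorable[group] += int(selected)
    return {g: favorable[g] / total[g] for g in total}

def screen_for_disparity(outcomes, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate (the "four-fifths" rule of thumb, assumed here)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical model decisions as (group, approved) pairs.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(decisions))       # {'A': 0.667, 'B': 0.333}
print(screen_for_disparity(decisions))  # {'B': 0.333} -> flagged as underrepresented
```

A flagged group is a signal for further investigation, not proof of bias; the point, as the publication argues, is that such checks should be a routine part of model development.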
Importantly, this resource includes a helpful table of example inspection methods for different kinds of models, including neural networks, tree-based methods, discriminant-based methods, instance-based methods, and generative methods. Finally, it describes Deloitte's GlassBox toolkit, which combines open-source and in-house tools to help companies validate AI algorithms, expose possible bias, and explain the decision-making of AI models.
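As one concrete illustration of the inspection methods the table catalogs, below is a minimal sketch using permutation importance, a widely used technique that applies even to black-box models such as the tree ensembles and neural networks covered there. The synthetic dataset and random-forest model are assumptions for demonstration only; this is not tied to Deloitte's GlassBox tools.

```python
# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much model accuracy drops. A large drop means the model
# relies on that feature, giving a model-agnostic view into a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Because it only requires the ability to query the model, this style of inspection works uniformly across the model families in the table, which is what makes it useful for the validation and explanation goals the toolkit targets.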