Making It Easier to Compare the Tools for Explainable AI

Author: Surya Karunagaran

Publisher: Partnership on AI

Publication Year: 2022

Summary: This article describes how the Partnership on AI, inspired in part by Google's Model Cards and Microsoft's Datasheets for Datasets, developed a standardized documentation format for explainable artificial intelligence (XAI) tools: the XAI Toolsheet. Explainable AI grows out of the desire for more human-interpretable models, supported by better documentation of how models are trained, what they are intended to do, and how they function. The Toolsheet is organized into three sections: metadata, utility, and usability. Metadata captures high-level information about the tool, including its developers, users, licensing, compatibility, and documentation. Utility covers the types of models and training data the tool applies to, the points at which explanations can be generated, the problem type, and more. Usability describes the tool's intended use cases and how explainable its outputs are for different users. The framework is aimed at large-scale industry machine learning workflows, but its emphasis on explainability and thorough documentation can be adopted in our own projects.
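
To make the three-section structure concrete, the sketch below models a toolsheet entry as a small Python data structure. This is an illustrative assumption only: the field names, types, and example values are chosen for demonstration and are not the Partnership on AI's actual XAI Toolsheet schema.

```python
# Illustrative sketch only: section and field names are assumptions for
# demonstration, not the Partnership on AI's actual XAI Toolsheet schema.
from dataclasses import dataclass


@dataclass
class Metadata:
    tool_name: str
    developers: list[str]
    license: str
    compatibility: str           # e.g. supported frameworks or languages
    documentation_url: str


@dataclass
class Utility:
    model_types: list[str]       # model families the tool can explain
    data_types: list[str]        # training/input data types supported
    explanation_stage: str       # e.g. "post-hoc" vs. "built-in"
    problem_types: list[str]     # e.g. classification, regression


@dataclass
class Usability:
    intended_use_cases: list[str]
    target_audiences: list[str]  # who the explanations are meant for


@dataclass
class XAIToolsheet:
    metadata: Metadata
    utility: Utility
    usability: Usability


# Hypothetical entry for a generic feature-attribution tool.
example = XAIToolsheet(
    metadata=Metadata(
        tool_name="ExampleExplainer",
        developers=["Example Lab"],
        license="Apache-2.0",
        compatibility="scikit-learn, PyTorch",
        documentation_url="https://example.org/docs",
    ),
    utility=Utility(
        model_types=["tree ensembles", "neural networks"],
        data_types=["tabular"],
        explanation_stage="post-hoc",
        problem_types=["classification", "regression"],
    ),
    usability=Usability(
        intended_use_cases=["model debugging", "regulatory review"],
        target_audiences=["data scientists", "auditors"],
    ),
)
```

Structuring project documentation this way, even informally, makes it easier to compare explainability tools side by side, which is the comparison problem the Toolsheet is designed to address.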