Author: Naman Bansal, Chirag Agarwal, Anh Nguyen
Publisher: N/A
Publication Year: 2020
Summary: This article, along with an accompanying talk, provides examples illustrating that many machine learning classification systems, including explainable models rather than just black boxes, are highly susceptible to manipulation and to unintuitive mistakes. These models, which aim to show how they classify or recognize an image, can produce vastly different results under small hyperparameter changes, especially on noisy data. Real-world data is unlikely to closely mirror training data, and even models trained on huge volumes of data cannot cover every case. Additionally, as these examples show, small changes to a model can have drastic effects on how the algorithm behaves. It is important to recognize that even explainable, transparent models may still classify poorly, and to understand the impact of wrong decisions an algorithm may make for seemingly no reason.
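The article itself does not include code; the sketch below is one way to reproduce the kind of hyperparameter sensitivity it describes, assuming a pretrained torchvision classifier and using one common attribution technique, SmoothGrad, where the noise level sigma is the hyperparameter being varied. All names and values here are illustrative, not drawn from the article.

    # Minimal sketch (not the authors' code): SmoothGrad saliency maps computed
    # with two different noise levels can rank pixels very differently for the
    # exact same image and prediction.
    import torch
    import torchvision.models as models

    def smoothgrad(model, x, target, sigma, n_samples=25):
        # SmoothGrad: average the input gradient over noisy copies of the image.
        grads = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            model(noisy)[0, target].backward()
            grads += noisy.grad
        return (grads / n_samples).abs().sum(dim=1)  # collapse color channels

    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    for p in model.parameters():
        p.requires_grad_(False)  # gradients are needed only w.r.t. the input

    x = torch.rand(1, 3, 224, 224)          # stand-in for a preprocessed image
    target = model(x).argmax(dim=1).item()  # explain the model's own prediction

    map_lo = smoothgrad(model, x, target, sigma=0.05)
    map_hi = smoothgrad(model, x, target, sigma=0.50)

    # Pearson correlation between the two saliency maps; a low value means the
    # "explanation" depends heavily on this single hyperparameter.
    stacked = torch.stack([map_lo.flatten(), map_hi.flatten()])
    print(torch.corrcoef(stacked)[0, 1].item())

A low correlation between the two maps means the explanation offered for the very same prediction hinges on one arbitrary setting, which is the fragility the summary above describes.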