Author: Charles Q. Choi

Publisher: IEEE Spectrum

Publication Year: 2021

Summary: This article details seven common flaws in AI models, along with real-world examples of each failure. It focuses on neural network models in particular, but many of the topics apply broadly, especially in classification scenarios. For example, it covers model brittleness: models can be fooled by a pattern they have not seen before, even one that would be trivial for a human (a sketch of this effect follows below). Models should certainly be tested against many adversarial cases, but practitioners must also plan for rare events that fool the model completely. Other topics include a lack of explainability, neural networks forgetting things they learned earlier, algorithmic bias, failure to accurately quantify uncertainty, and neural networks' inability to do math. Too much faith should not be placed in models, and there should always be plans and systems in place for what happens when models fail.
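
To make the brittleness point concrete, here is a minimal Python sketch of my own (not from the article): a tiny linear classifier with hypothetical weights is flipped by a small, targeted perturbation in the spirit of the fast gradient sign method. All names and values here are illustrative assumptions, not the article's examples.

```python
# Illustrative sketch of model brittleness: a small adversarial
# perturbation flips a linear classifier's prediction.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" 2-class linear model: score = w . x + b
w = rng.normal(size=100)
b = 0.0

x = rng.normal(size=100)        # an ordinary input
score = w @ x + b
label = score > 0

# FGSM-style step: for a linear model, the gradient of the score
# with respect to x is just w, so step each feature against the
# sign of w (to lower a positive score, or raise a negative one).
eps = 0.25
x_adv = x - eps * np.sign(w) if label else x + eps * np.sign(w)

adv_score = w @ x_adv + b
print(f"original score {score:+.2f}, adversarial score {adv_score:+.2f}")
# Each feature changes by at most eps, so x and x_adv look nearly
# identical, yet the accumulated shift typically flips the prediction.
```

Even this toy case shows why adversarial testing alone is not enough: the perturbation is bounded and imperceptible feature by feature, yet it reliably moves the score across the decision boundary, which is the kind of rare-event failure the article says must be planned for.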