Author: Douglas Heaven

Publisher: Nature

Publication Year: 2019

Summary: The following article discusses how deep-learning artificial intelligence (AI) models, specifically deep neural networks, can be extremely brittle when presented with new cases. These technologies can also be maliciously attacked, for example by adding stickers to a stop sign to sabotage a car on autopilot. Cases of AI sabotage could even resemble classic computer-security scenarios, such as a hacker exploiting cloud-based language-recognition algorithms to create a spam-dodging bot. Of course, malicious sabotage isn't required to break these models; data scientists must also be aware of how ordinary noise can degrade them. Just as in computer security, there must be systems in place for when these tools inevitably fail, along with a plan for how to respond.
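
The article does not prescribe a specific attack, but the kind of small perturbation it describes is often illustrated with the fast gradient sign method (FGSM). The sketch below assumes a PyTorch classifier; `model`, `image`, and `label` are hypothetical placeholders, and the epsilon value is arbitrary.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged by adversarial noise (FGSM sketch)."""
    image = image.clone().detach().requires_grad_(True)
    # Measure how wrong the model is on the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Even with a perturbation too small for a person to notice, the model's prediction on the returned image can flip, which is the brittleness the article warns about.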