Author: Phaedra Boinodiris

Publisher: IBM Technology

Publication Year: 2021

Summary: The following video discusses how artificial intelligence (AI) is making decisions that directly affect all of us. People assume that because AI is a machine, it is unbiased and its decisions are “correct.” The video presents five pillars of earning trust in an AI decision. The first pillar is fairness: is the AI fair to everyone, and in particular, does it treat historically underrepresented groups equitably? The second pillar is explainability: can you explain to an end user how your team used data, methodologies, and expertise to train the model? The third pillar is robustness, which concerns the security of the model: can someone hack into it and manipulate it to unfairly benefit one party? The fourth pillar is transparency: are you telling people that an AI is being used to make the decision, and have you given them enough information to learn about the model? The final pillar is data privacy: can you ensure that the data flowing into and out of the model is protected?
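As a purely illustrative aside (not covered in the video), teams often probe the fairness pillar with simple group metrics. The Python sketch below computes a disparate impact ratio, the rate of favorable outcomes for a protected group divided by the rate for everyone else; the function name, the hypothetical loan-approval data, and the group labels are all assumptions made for illustration, not anything described in the source.

    # Illustrative only: a minimal group-fairness check (disparate impact ratio).
    # All names, data, and thresholds here are hypothetical.
    def disparate_impact_ratio(predictions, group_labels, protected_group):
        """Ratio of favorable-outcome rates: protected group vs. everyone else."""
        protected = [p for p, g in zip(predictions, group_labels) if g == protected_group]
        others = [p for p, g in zip(predictions, group_labels) if g != protected_group]
        if not protected or not others:
            raise ValueError("Both groups must be represented in the data.")
        protected_rate = sum(protected) / len(protected)  # favorable-outcome rate, protected group
        other_rate = sum(others) / len(others)            # favorable-outcome rate, everyone else
        if other_rate == 0:
            return float("inf")
        return protected_rate / other_rate

    # Hypothetical loan-approval predictions (1 = approved) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    ratio = disparate_impact_ratio(preds, groups, protected_group="B")
    print(f"Disparate impact ratio: {ratio:.2f}")  # values far below 1.0 suggest possible bias

A value near 1.0 means both groups receive favorable outcomes at similar rates; a much lower value is one signal (among many) that a model may need closer review.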

IBM has laid out three principles to apply when using AI: AI should not replace human intelligence but augment it, surfacing patterns we cannot see on our own; data belongs solely to its creator; and the entire AI lifecycle should be transparent and understandable. It is essential to remember that AI is not just a technology problem but a social one. It concerns people (the diversity of the teams who build the AI), process (what standards will you publicly commit to for your employees and the open market?), and tooling (what tools will you use to uphold the five pillars of trust?). Together, these form a backbone that supports the AI technologies we build.