Author: Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS)
Publisher: University of Pennsylvania
Publication Year: N/A
Summary: This paper explores the potential risks of AI and provides a standardized, practical categorization of those risks: data-related risks, AI/ML attacks, testing & trust, and compliance. AI governance frameworks can help organizations learn, govern, monitor, and mature their AI adoption. Four core components of AI governance are definitions, inventory, policy/standards, and a governance framework that includes controls. While there is no one-size-fits-all approach, practices institutions might consider adopting to mitigate AI risk include oversight and monitoring, enhancing explainability and interpretability, and exploring evolving risk-mitigation techniques such as differential privacy and watermarking, among others.