At Google Cloud, A.I. Ethics Requires ‘Iced Tea’ and ‘Lemonaid’

Author: Jeremy Kahn

Publisher: Fortune

Publication Year: 2022

Summary: This article discusses how manufacturing customers of Google's Cloud business wanted to use Google's computer vision to detect defects in their products on their assembly lines. However, many of these customers did not have enough data to train a defect detector, because defects were either rare or did not occur the same way every time. Google eventually succeeded in creating an algorithm that could detect defects in products even if it had not previously been trained on them. Such a "universal" algorithm is key to artificial intelligence (AI) making an impact in areas that lack extensive data, and it can serve as a building block for making businesses perform better than before. However, these incredibly sophisticated AI systems come with many ethical pitfalls: if an algorithm is trained on limited data, many biases may lurk within it, and the best safeguard is human review. It is necessary to ask how ethics is considered at each stage of building and implementing the model, and to test where and how the algorithm can be biased and fail. The creators of AI must keep it honest and ethical. The power of AI can be used to accomplish incredible feats, but data scientists must be ready to keep it on a leash and pull back when necessary.