Author: Karen Hao
Publisher: MIT Technology Review
Publication Year: 2019
Summary: The article discusses how artificial intelligence (AI) reinforces the “filter bubble” online. Users tend to interact with content they enjoy, which lets recommendation systems serve more of that content. The potential issue is that opposing viewpoints are increasingly left out of results: users are fed what they already agree with, which reinforces their opinions and forms an echo chamber. The more accurately a model recommends content we are interested in, the faster it traps us in an information bubble. Building these models involves a tradeoff: lower accuracy surfaces novel content and lets users explore, while higher accuracy satisfies users by ensuring they will like what they see. Many researchers attribute the rapid polarization of groups (political and otherwise) in recent years partly to these models, and even to search engines.
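The accuracy-versus-exploration tradeoff the article describes can be sketched with a toy epsilon-greedy recommender. This is a minimal illustration, not the article's method; all data, topic labels, and function names below are invented for the example:

```python
import random

def recommend(liked_topics, catalog, epsilon, rng):
    """Epsilon-greedy pick: with probability epsilon, explore a random
    item from the full catalog; otherwise exploit by recommending an
    item whose topic the user has already engaged with."""
    if rng.random() < epsilon:
        return rng.choice(catalog)  # explore: may escape the bubble
    familiar = [item for item in catalog if item["topic"] in liked_topics]
    return rng.choice(familiar) if familiar else rng.choice(catalog)

# Simulate a user who only engages with "politics-A" content, in a
# catalog split evenly between two opposing topics.
catalog = [{"id": i, "topic": t}
           for i, t in enumerate(["politics-A"] * 5 + ["politics-B"] * 5)]
rng = random.Random(0)

def bubble_rate(epsilon, n=10_000):
    """Fraction of recommendations that match the user's existing views."""
    hits = sum(recommend({"politics-A"}, catalog, epsilon, rng)["topic"]
               == "politics-A" for _ in range(n))
    return hits / n

print(bubble_rate(0.0))  # pure exploitation: exactly 1.0, a sealed bubble
print(bubble_rate(0.5))  # half exploration: roughly 0.75
```

At epsilon = 0 the "most accurate" policy serves only agreeable content, which is the trap the article describes; raising epsilon sacrifices some accuracy to keep opposing content in the mix.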