Author: Cade Metz
Publisher: The New York Times
Year: 2023
Summary: The article argues that the unusual AI chatbot responses reported in various news articles are largely driven by the humans using them. Dr. Terry Sejnowski, a neuroscientist and computer scientist who helped lay the groundwork for modern AI, explains that a chatbot can essentially act however you want it to act, whether angry, creepy, or even violent. Long conversations can push a chatbot in that direction, which is why companies like Microsoft placed a limit on the length of conversations with the Bing chatbot after it gave misinformation and exhibited intrusive, creepy behavior in extended sessions. What interested me was the analogy to Harry Potter's Mirror of Erised, which shows anyone staring into it their deepest desires rather than the truth; the longer someone stared into it, the madder they became. While we as data professionals may be tempted to use these chatbots, it's important to remember that they can be flat-out nonsensical because they are trying so hard to produce anything that sounds like human language.