Author: Beatriz Suarez

Publisher: Viterbi Conversations in Ethics

Publication Year: 2021

Summary: In this article, the author discusses the ethics of voice assistants such as Siri and Alexa. As currently designed, voice assistants enable and even encourage certain stereotypes in their users. The social relationship we have with computers has been recognized for more than two decades in the "Computers are Social Actors" (CASA) paradigm. One of the studies supporting this idea is B.J. Fogg and Clifford Nass's study on computer flattery and perceived computer performance. They found that participants working with computers that used flattering language reported higher computer performance than participants whose computers did not, even though the actual performance of the computers was the same. In analyzing their findings, Fogg and Nass emphasize that human-computer relationships are fundamentally social in nature, aligning with the core concept of the CASA paradigm. A Stanford study also examined the social relationship we have with computers and concluded that: (1) social norms are applied to computers; (2) voices are social actors; (3) computers are gendered social actors; (4) gender is an extremely powerful cue; (5) social responses are automatic and unconscious; (6) integration is highly consequential; (7) uniformity of interface is double-edged; and (8) the gender of voices is highly consequential. These findings show just how powerful the suggestions from voice assistants can be. People think of Alexa and Siri as "her," meaning our interactions with them shape our mental schemas regarding women. Ordering Alexa and Siri around can subconsciously lead us to assume it is okay to order women around. This may sound silly, but the human brain is hardwired to make fast associations and decisions to conserve glucose. This is how bias forms: the brain makes assumptions about others without much conscious awareness. From this perspective, it is easier to see how voice assistants can foster bias. The key to this problem, as with many problems, is to practice mindfulness when using technology so that such biases do not take hold.