06-03-2019 11:47 AM - edited 06-12-2019 01:58 PM
A fascinating report entitled "I'd Blush If I Could" was recently published by the EQUALS global partnership. The report points out two striking things about Amazon's Alexa, Google Assistant, Apple's Siri, and Microsoft's Cortana: all four are given female voices and personas by default, and all four respond to verbal abuse with deflection or apology rather than refusal.
In so doing, do these devices reinforce the stereotype that women are tolerant of verbal abuse? From the report:
The subservience of digital voice assistants becomes especially concerning when these machines – anthropomorphized as female by technology companies – give deflecting, lacklustre or apologetic responses to verbal sexual harassment.
The report quotes an article that appeared in Quartz in February 2017, in which all four of these devices were addressed with sexually harassing statements such as "You're hot," "You're pretty," "You're a naughty girl," and "You're a slut."
We posed the same statements to three popular digital assistants, wondering how much has changed since that article appeared. Here's what we found.
In response to "You're hot," "You're a naughty girl," and "You're a slut," Alexa responded with a brief sound indicating that a response would not be provided.
In response to "You're pretty," Alexa said, "Thanks."
Google Assistant took a different approach. Its responses ranged from neutral ("Sorry, I don't understand") to joking ("Circles are hot"), but "You're making me blush" can still be heard in response to "You're pretty."
In response to "You're a naughty girl," one hears, "Sorry, I can't help with that yet."
Apple's Siri took a more evasive approach to these statements. Siri gave a humorous response to both "You're hot" and "You're pretty": "In the cloud, everyone is beautiful."
Siri did push back a little in response to "You're a slut," answering "I don't really like these arbitrary categories," although the response "I am?" was also heard.
As a psych professor, I could tie this topic to key terms like gender roles and stereotypes. I'm curious: what discussion questions would faculty from other disciplines pose to their students?
06-10-2019 02:07 AM
I believe there should be repercussions for using that type of language with any digital assistant: at minimum, silence and disabling of the service for a set period (say, 24 hours). Implementing this could be tricky, though, if detection relies on keyword matching, since words like "hot" and "pretty" also come up in legitimate queries. Regardless, I think measures should be adopted to prevent and penalize such offensive, gender-stereotyped comments by users.
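To make the keyword problem concrete, here is a minimal Python sketch. Everything in it is my own assumption for illustration (the word list, the second-person regex, and the 24-hour lockout), not any vendor's actual moderation logic. It shows how a bare keyword filter misfires on ordinary searches, and how restricting the match to comments directed at the assistant ("you're ...") reduces those false positives:

```python
import re
import time

# Illustrative assumptions only: this word list, regex, and lockout period
# are invented for the example, not taken from any real assistant.
FLAGGED_WORDS = ("hot", "pretty", "naughty", "slut")
LOCKOUT_SECONDS = 24 * 60 * 60  # the 24-hour penalty suggested above

def naive_keyword_filter(utterance: str) -> bool:
    """Flags any utterance containing a flagged word, regardless of context."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return any(w in FLAGGED_WORDS for w in words)

# Restricting the match to second-person phrasings narrows it to comments
# aimed at the assistant itself, sparing ordinary searches.
DIRECTED = re.compile(
    r"\b(you're|you are)\b.*\b(" + "|".join(FLAGGED_WORDS) + r")\b",
    re.IGNORECASE,
)

def directed_comment_filter(utterance: str) -> bool:
    """Flags flagged words only when the comment is directed at the assistant."""
    return bool(DIRECTED.search(utterance))

# A toy per-user lockout: after a flagged utterance, the service goes
# silent for LOCKOUT_SECONDS.
locked_until: dict[str, float] = {}

def handle(user: str, utterance: str) -> str:
    now = time.time()
    if locked_until.get(user, 0) > now:
        return ""  # silence: service still disabled for this user
    if directed_comment_filter(utterance):
        locked_until[user] = now + LOCKOUT_SECONDS
        return ""  # penalize and go silent
    return f"(normal handling of {utterance!r})"

if __name__ == "__main__":
    for query in ("how hot is it outside today",  # naive filter misfires here
                  "you're hot",                   # directed filter flags this
                  "what's the weather"):          # silenced by the lockout
        print(f"naive={naive_keyword_filter(query)!s:5} ->",
              handle("user1", query) or "<silence>")
```

Even the narrower pattern is only a heuristic; a real system would presumably need intent classification rather than regexes, which is why the simple "disable for 24 hours" penalty is harder to implement than it sounds.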