Comment Of The Day Weekend Continues! Comment Of The Day: “Morning Ethics Warm-Up, 12/30/2017: Is Robert Mueller Biased? …Is President Trump A Robot?”

A single line in this morning’s Warm-Up sparked this fascinating exposition by Ash. Here was the context:

Jay Malsky, an actor who has appeared in drag as Hillary Clinton, began shouting at the audio-animatronic Donald Trump while watching the Hall of Presidents attraction at Disney World. (The Huffington Post said he “mercilessly” heckled the robot, a description showing some derangement of its own. Robots don’t need mercy, and you can’t “heckle” one either.)

Here is Ash’s Comment of the Day, primarily a quote but a perfectly chosen one, on the post, Morning Ethics Warm-Up, 12/30/2017: Is Robert Mueller Biased? Are The Patriots Cheating Again? Is Larry Tribe Deranged? Is President Trump A Robot?: 

“Robots don’t need mercy, and you can’t ‘heckle’ one either.”

You should date this and file it, because I guarantee you, the way you treat Siri and Alexa and Cortana and Ok Google is *already* being described as problematic.

“Sexual harassment: there are no limits…According to Dr. Sweeney, research indicates virtual assistants like Siri and Amazon’s virtual assistant Alexa find themselves fending off endless sexual solicitations and abuse from users. But because humans don’t (yet) attach agency or intelligence to their devices, they’re remarkably uninhibited about abusing them. Both academic research and anecdotal observation on man/machine interfaces suggest raised voices and vulgar comments are more common than not. It’s estimated that about 10% to 50% of interactions are abusive, according to Dr. Sheryl Brahnam in a TechEmergence interview late last year.

“These behaviors are simply not sustainable. If adaptive bots learn from every meaningful human interaction they have, then mistreatment and abuse become technological toxins. Bad behavior can poison bot behavior. That undermines enterprise efficiency, productivity, and culture.

“That’s why being bad to bots will become professionally and socially taboo in tomorrow’s workplace. When “deep learning” devices emotionally resonate with their users, mistreating them feels less like breaking one’s mobile phone than kicking a kitten. The former earns a reprimand; the latter gets you fired.

“Just as one wouldn’t kick the office cat or ridicule a subordinate, the very idea of mistreating ever-more-intelligent devices becomes unacceptable. While not (biologically) alive, these inanimate objects are explicitly trained to anticipate and respond to workplace needs. Verbally or textually abusing them in the course of one’s job seems gratuitously unprofessional and counterproductive.

“Crudely put, smashing your iPhone means you have a temper; calling your struggling Siri inappropriate names gets you called before HR. Using bad manners with smart technologies can lead to bad management.”


Filed under Comment of the Day, Ethics Alarms Award Nominee, Etiquette and manners, Science & Technology, Workplace

9 responses to “Comment Of The Day Weekend Continues! Comment Of The Day: ‘Morning Ethics Warm-Up, 12/30/2017: Is Robert Mueller Biased? …Is President Trump A Robot?’”

  1. Wasn’t this the theme in “Westworld”? It’s okay to be abusive to machines, because they’re just machines….right? RIGHT??

    But honestly, I think there is a difference between, say, hitting a punching bag, and hitting a picture of someone that you put on the punching bag. If you would never want to hurt someone in real life, then I would think hurting someone in fantasy shouldn’t appeal to you. If you WANT to hurt someone in real life, and you hurt them in fantasy instead, well…good for you for not doing it for real, but is the substitute enough gratification for you, or does it make hurting someone feel more “normal”? If the only reason you’re using a fantasy substitute is because you’re not “allowed” to do it in real life, I’d be concerned about where your ethical priorities really are.

    • Ash

      It definitely was a theme in Westworld, but I am pretty sure that long before we get to Dolores levels of AI, we’re going to see demands that Google, Apple, et al. monitor the language we use with their assistants, and we’ll see Google, Apple, et al. respond positively to those demands.

      Or we’ll see subpoenas to Google, et al., to examine our interactions with OK Google to show patterns of abuse, violence and misogyny. While many will be quashed, some will get through, like this horrible warrant, which a court granted in “the case of a 17-year-old boy who was asked to strip, then masturbate in front of police officers in Virginia.”

      https://gizmodo.com/forcing-teen-to-masturbate-for-police-photos-is-obvious-1821092318

  2. Here's Johnny

    “… there is a difference between say, hitting a punching bag, and hitting a picture of someone …”
    I think it’s a significant difference. For 20 years, I coached a high school rifle team, and, somewhat predictably, in 2003 some of my little darlings wanted to hang pictures of Saddam Hussein for their target practice. Absolutely, NO! We’ll stick with our bullseye targets. End of discussion.
    The taking of a human life is not something to be treated cavalierly, and doing it virtually merits considerable caution as well (which is why I don’t care for video games such as Call of Duty). Hillary Clinton dropped even lower, in my estimation, when she joked and (some say) giggled about the killing of Muammar Gaddafi.
    Abuse of objects that represent humans, such as robots, seems similarly wrong.

  3. Ash

    Thanks, but I need to make clear that my comment was only the first line; everything after that was quotes.

    The paragraph about Dr. Sweeney came from abc.net.au (http://www.abc.net.au/news/2017-08-11/why-are-all-virtual-assisants-female-and-are-they-discriminatory/8784588); in editing I lost the link.

    The rest came from HBR.

  4. Ex Machina: https://youtu.be/Wkgvzc1pvJw

  5. “the very idea of mistreating ever-more-intelligent devices becomes unacceptable. While not (biologically) alive, these inanimate objects are explicitly trained to anticipate and respond to workplace needs. “

    Well, if they don’t make coffee by 7 AM the way the humans expect, they are going to get abused. This is a nonstarter.
