Icky Or Unethical? Alexa Is Learning A New Trick

From Ars Technica:

Amazon is figuring out how to make its Alexa voice assistant deepfake the voice of anyone, dead or alive, with just a short recording. The company demoed the feature at its re:Mars conference in Las Vegas on Wednesday, using the emotional trauma of the ongoing pandemic and grief to sell interest.

Amazon’s re:Mars focuses on artificial intelligence, machine learning, robotics, and other emerging technologies, with technical experts and industry leaders taking the stage. During the second-day keynote, Rohit Prasad, senior vice president and head scientist of Alexa AI at Amazon, showed off a feature being developed for Alexa.

After noting the large amount of lives lost during the pandemic, Prasad played a video demo, where a child asks Alexa, “Can grandma finish reading me Wizard of Oz?” Alexa responds, “Okay,” in her typical effeminate, robotic voice. But next, the voice of the child’s grandma comes out of the speaker to read L. Frank Baum’s tale.

The article notes some (arguably) beneficial uses of deepfake technology, such as having the means to complete a vocal recording when a performer dies suddenly. [Ethics Alarms discussed an example of that here.] It doesn’t take much imagination, however, to think of many more uses that are unethical and even criminal. Does Amazon care? Of course not: companies want to make money, and scientists want to develop whatever new technology they can, considering only, in the words of Dr. Ian Malcolm (Jeff Goldblum…the second appearance of Ian and Jeff on EA today), whether they could do it rather than whether they should.

Ars Technica adds,

Besides worries of deepfakes being used for scams, rip-offs, and other nefarious activity, there are already some troubling things about how Amazon is framing the feature, which doesn’t even have a release date yet.

Before showing the demo, Prasad talked about Alexa giving users a “companionship relationship.”

“In this companionship role, human attributes of empathy and affect are key for building trust,” the exec said. “These attributes have become even more important in these times of the ongoing pandemic, when so many of us have lost someone we love. While AI can’t eliminate that pain of loss, it can definitely make their memories last.”

Prasad added that the feature “enables lasting personal relationships.”


One of my favorite historical characters, the prolific inventor Walter Hunt (1796-1859), famously was the first to develop a sewing machine but didn’t patent it for fear that it would put seamstresses out of work. The lesson is that it is usually impossible to assess the dangers and potential benefits of new technology at its origin. Thus it cannot be unethical to develop such innovations, and it may be unethical to abort an innovation before it can mature and have a chance to show humanity what it can accomplish.

Alexa’s new trick, however, is definitely creepy. And icky.

9 thoughts on “Icky Or Unethical? Alexa Is Learning A New Trick”

  1. Creepy yes…like when my brother-in-law gave our two-year-old a present he said her Nana had got her…six months after Nana died. That was creepy and confusing to a child.
    How easy it would be to make some political person’s voice say something they never did…or would….like right before an election. And given the media’s penchant for running headlines before doing any research or verification, the tangled webs woven could be unimaginable.

  2. Your mention of the sewing machine technopanic episode reminded me of how opposed many were to the Gutenberg printing press putting Monks out of work, or that Socrates had warned against writing because it would “create forgetfulness in the learners’ souls because they will not use their memory.” (He got that right; I can’t remember phone numbers because I don’t need to, and I had to use a search engine to look up his quote because I couldn’t remember it.)

    To the example in the video, my mom had ALS. She lost her voice long before she lost the ability to walk or use a computer. At that time, all the available text to speech systems sounded like variations on Stephen Hawking. We would have welcomed a system that could have spoken with her voice.

  3. I would hate to have an Alexa voice made of me last weekend when I thought I was suffering from a cold (it was actually COVID). I sounded terrible.

  4. Icky.

    My suspicion (and that’s all it is at this point) is that this type of thing could end up being a huge invasion of privacy, and it could also prolong grief.

  5. I never cease to be amazed by the creativity of the human mind.

    With two caveats, the suppression of technology is unethical and foolhardy. My two caveats are that neither the acquisition of raw materials nor the testing and development of the technology should breach ethical standards.

    I do not believe inanimate objects in themselves can be unethical. What can be unethical is how people use the technology. Guns are a good example of this concept. Additionally, it is also foolhardy to suppress the development of new technology. If Fred doesn’t develop the technology, Sally, Ivan, Juan, etc. will.

    Regarding the icky part: given that the potential and temptation for abuse are extreme, this technology is horrifying beyond belief, but not icky.

  6. I was going to vote “icky,” but I considered, first, suppose I had a family member who wanted Alexa to have his/her voice. So long as it was voluntary, with the owner’s permission, I don’t think it’s unethical.

    Then I remembered visiting Disney’s Hall of Presidents at the ’64 World’s Fair in Queens and witnessing Disney’s first use of animatronics in the Hall of Presidents, though I think the only “real” voices came from FDR onward to JFK, using recordings of actual speeches. Earlier voices were clearly imagined, but generally used words that had actually been spoken. What a phenomenal engineering accomplishment by Disney’s “Imagineers!”

    Today, with deep fakery, it’s impossible to tell the difference between actual speech and generated speech produced from a keyboard entry. In general, I’d have to go with unethical.

    FD, you have my sincere sympathy for your Mom’s loss of speech early in her journey through ALS. In that instance, and others like it, I think we have to consider it absolutely ethical.


  7. The issue here is one of packaging. Deepfakes of audio and video are fairly easy to produce with current technology. There are many websites and software packages readily available to create your own with little advanced computer skill required. The technology exists, and there is no way to close the barn door on its existence.
    Only ethics can control the use of this technology by individuals and groups. Unethical persons or state actors will abuse it. Ethical people will find innovative uses for it. No law can prevent its use; laws can only punish inappropriate use ex post facto, likely after the damage to an individual’s reputation or society’s trust has been done.

    Amazon’s packaging of the technology with Alexa to make it widely accessible is arguably a plus for society. The particular use of emulating the voice of deceased loved ones may be icky, and it may have mental health impacts if overused. However, making the public broadly aware of the technology helps the public think critically about future abuses of it.

    Current artificial intelligence-produced deepfakes are often crude and easily discerned. However, as the technology improves, it may not be feasible to forensically detect whether a particular video or audio clip is fake. Rather, investigators and the public at large will have to discern from context whether a video is fake or authentic. Only personal interaction with the technology can make the public aware of its implications and thus develop the skills needed to evaluate potential deepfakes critically.

  8. I’m not certain about anyone else, but I’ve been fairly good at picking out computer-generated audio or video. The cadence is always a bit off, the inflections are slightly unnatural, and sometimes the pronunciation isn’t quite right. I doubt I’ve picked out 100% of the computer-generated audio I’ve heard, but it seems there’s always something a bit off with it. This may be something that can be refined out with future development, or it may represent some fundamental wall that a computer just won’t be able to break.
    I can see how this Alexa technology can help and hurt the grieving process. I would assume that hearing a simulation of a dead loved one’s voice would be optional. I would hope that Amazon wouldn’t start making a best fit computer voice without that person’s consent first, but previous Amazon examples would dash that hope. Is the technology inherently unethical? No. Would I trust Amazon to use it ethically and only ethically? Hard no.
