Fixing This Problem Requires Leaping Onto a Slippery Slope: Should We?

Nicholas Kristof has sounded the alarm on the growing problem of artificial intelligence deepfakes online. I must admit, I was unaware of the extent of the phenomenon, which is atrocious. He writes in part,

[D]eepfake nude videos and photos …humiliate celebrities and unknown children alike. One recent study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women or girls…Companies make money by selling advertising and premium subscriptions for websites hosting fake sex videos of famous female actresses, singers, influencers, princesses and politicians. Google directs traffic to these graphic videos, and victims have little recourse.

Sometimes the victims are underage girls….While there have always been doctored images, artificial intelligence makes the process much easier. With just a single good image of a person’s face, it is now possible in just half an hour to make a 60-second sex video of that person. Those videos can then be posted on general pornographic websites for anyone to see, or on specialized sites for deepfakes.

The videos there are graphic and sometimes sadistic, depicting women tied up as they are raped or urinated on, for example. One site offers categories including “rape” (472 items), “crying” (655) and “degradation” (822)….In addition, there are the “nudify” or “undressing” websites and apps …“Undress on a click!” one urges. These overwhelmingly target women and girls; some are not even capable of generating a naked male. A British study of child sexual images produced by artificial intelligence reported that 99.6 percent were of girls, most commonly between 7 and 13 years old.

Yikes. These images don’t qualify as child porn, because the laws against that are based on the actual abuse of the children in the photos. With the deepfakes, no children have been physically harmed. Right now, there are no laws directed at what Kristof is describing. He also links to two websites on the topic started by young women victimized by altered photos and deepfaked videos of them being spread online: My image My choice, and AI Heeelp!


Icky Or Unethical? Alexa Is Learning A New Trick

From Ars Technica:

Amazon is figuring out how to make its Alexa voice assistant deepfake the voice of anyone, dead or alive, with just a short recording. The company demoed the feature at its re:Mars conference in Las Vegas on Wednesday, using the emotional trauma of the ongoing pandemic and grief to sell interest.

Amazon’s re:Mars focuses on artificial intelligence, machine learning, robotics, and other emerging technologies, with technical experts and industry leaders taking the stage. During the second-day keynote, Rohit Prasad, senior vice president and head scientist of Alexa AI at Amazon, showed off a feature being developed for Alexa.

After noting the large amount of lives lost during the pandemic, Prasad played a video demo, where a child asks Alexa, “Can grandma finish reading me Wizard of Oz?” Alexa responds, “Okay,” in her typical effeminate, robotic voice. But next, the voice of the child’s grandma comes out of the speaker to read L. Frank Baum’s tale.


Presenting The Complete Fake Voice Ethics Verdicts


In Roadrunner: A Film About Anthony Bourdain, filmmaker Morgan Neville examines the life and death of the famous TV chef. In the process, he introduced a new documentary device: using artificial intelligence to simulate Bourdain’s voice.

In a recent interview with the New Yorker, Neville explained that he used AI to synthetically create a voiceover reading of a Bourdain email that sounded as if Bourdain were the reader. He engaged a software company and provided about a dozen hours of recordings, allowing it to create a convincing electronic model of Bourdain’s voice. That voice reads three lines in the film, including an email Bourdain sent to a friend: “My life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” But Bourdain, of course, never read that line or the other two, to which Neville’s message to viewers is “Nyah, nyah, nyah!” “If you watch the film … you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know,” he said.

Well, critics, including Ottavia Bourdain, the chef’s former wife, objected to the ethics of the unannounced use of a “deepfake” voice to say sentences that Bourdain never spoke.

I was going to make this an Ethics Quiz, and then, after thinking about it for a few seconds, decided that the issue doesn’t rate a quiz, because I’m not in any doubt over the answer. Is what Neville did unethical?

Yes, of course it is. It is unethical because it deliberately deceives listeners into believing that they are hearing the man talking when he never said the words they are hearing. It doesn’t mitigate the deception, as Neville and his defenders seem to think, that Fake Bourdain is reading the actual unspoken words in an email. It’s still deception. Is the creation and use of a zombie voice for this purpose also unethical, like the creation of CGI versions of famous actors to manipulate in movies they never made, discussed (and condemned) here?

That’s a tougher call, but I come down on the side of the dead celebrity who is being made into an unwilling ventriloquist’s dummy by emerging technology.

This would be a propitious time to point out what is ethical and what isn’t when it comes to using a dead celebrity’s voice, real or fake, in various forms of communications and education:
