Presenting The Complete Fake Voice Ethics Verdicts

In “Roadrunner: A Film About Anthony Bourdain,” filmmaker Morgan Neville examines the life and death of the famous TV chef. In the process, he introduced a new documentary device: using artificial intelligence to simulate Bourdain’s voice.

In a recent interview with the New Yorker, Neville explained that he used AI to create a synthetic voiceover of a Bourdain email that sounds as if Bourdain himself were reading it. He engaged a software company and provided about a dozen hours of recordings, allowing it to build a convincing electronic model of Bourdain’s voice. That voice reads three lines in the film, including an email Bourdain sent to a friend: “My life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” But Bourdain, of course, never spoke that line or the other two aloud, to which Neville’s message to viewers is “Nyah, nyah, nyah!” “If you watch the film … you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know,” he said.
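
For readers curious about the mechanics, this kind of voice cloning is now within reach of off-the-shelf, open-source tools. Here is a minimal sketch, assuming the open-source Coqui TTS library and its XTTS voice-cloning model; the file names are hypothetical, and this illustrates the general technique only, not the actual software Neville’s vendor used.

```python
# A minimal sketch of voice cloning with the open-source Coqui TTS
# library (https://github.com/coqui-ai/TTS). This is NOT the software
# used for the documentary; it only illustrates the general technique.
# The .wav file names below are hypothetical.
from TTS.api import TTS

# Load a pretrained voice-cloning model. XTTS v2 can clone a voice from
# a short reference recording; a production effort like the one described
# above would use hours of audio for a more convincing result.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize new speech in the reference speaker's voice: words the
# speaker never actually said aloud.
tts.tts_to_file(
    text="My life is sort of shit now.",
    speaker_wav="reference_recording.wav",  # hypothetical sample of the target voice
    language="en",
    file_path="cloned_line.wav",
)
```

The ethical issue discussed below is independent of the particular tool; several commercial and open-source systems can now produce the same deception.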

Well, critics, including Ottavia Bourdain, the chef’s former wife, objected to the ethics of an unannounced use of a “deepfake” voice to say sentences that Bourdain never spoke.

I was going to make this an Ethics Quiz, but after thinking about it for a few seconds, I decided that the issue doesn’t rate a quiz, because I’m not in any doubt over the answer. Is what Neville did unethical?

Yes, of course it is. It is unethical because it deliberately deceives listeners into believing that they are hearing the man talking when he never said the words they are hearing. It doesn’t mitigate the deception, as Neville and his defenders seem to think, that Fake Bourdain is reading words Bourdain actually wrote in an email but never spoke aloud. It’s still deception. Is the creation and use of a zombie voice for this purpose also unethical, like the creation of CGI versions of famous actors to manipulate in movies they never made, discussed (and condemned) here?

That’s a tougher call, but I come down on the side of the dead celebrity who is being made into an unwilling ventriloquist’s dummy by emerging technology.

This would be a propitious time to point out what is ethical and what isn’t when it comes to using a dead celebrity’s voice, real or fake, in various forms of communications and education:

Ethical: Using the celebrity’s voice in a manner that the dead celebrity explicitly approved, when it involves no deception of the audience.

Ethical: Using an actor to read what a dead celebrity wrote or said, when what is being read was said or written in the same context in which the film or documentary presents it, and when it is obvious or revealed that the voice used is not the real individual’s. Example: Paul Giamatti voicing Theodore Roosevelt in Ken Burns’ “The Roosevelts” and being credited as doing so.

That’s it for the ethical devices. Now for what’s unethical…

Unethical: Movies and TV shows playing fake versions of famous records sung by imitators without revealing this to the audience. This is a pet peeve of mine: apparently it is often cheaper to make a new, fake version of a recording than to pay the money to use the real one. I don’t care if it is played under dialogue or not: most of the time the fake version doesn’t do the real artist or the song justice, and the inferior version misleads the listener.

Unethical: Movies using different actors or singers to dub the voices of stars, regardless of the reasons, and not crediting the off-screen talent. This is true even if the dubbing actors or singers accept compensation not to be credited. Examples: Marni Nixon singing for Audrey Hepburn, Deborah Kerr and Natalie Wood in “My Fair Lady,” “The King and I,” and “West Side Story”; Paul Frees doing Tony Curtis’ female voice in “Some Like It Hot.”

Unethical: Using a dead actor or singer’s performance in another medium or venue to simulate a live performance before a current audience when the performer never approved or consented to such use, regardless of whether the performer’s estate or family can legally grant permission for such a use. Here, I endorsed laws protecting all dead public figures from undignified future cyber- or hologram manipulation absent their consent.

Unethical: Any fake voice of any individual, no matter how it is created, used to deceive viewers or listeners that the individual spoke words he or she did not, in any context the individual did not intend. Example: a deepfake Anthony Bourdain reading the real chef’s email.

_____________________

Pointer? I lost the reference! I’m sorry! Let me know if you sent me this topic, and I’ll give credit where it’s due.

4 thoughts on “Presenting The Complete Fake Voice Ethics Verdicts”

  1. How long before AI deepfake voices are used to portray a political figure’s “could have…might have said” moments to a gullible public?

  2. What about software that simulates a voice actor’s character, but not the actor’s actual voice?

    There’s a site called 15.ai which does scary good text-to-speech conversion using the voices of various cartoon and video game characters. It’s currently down (and tends to be so for long stretches while the owner updates it), but it’s considered one of the best voice simulators out there. Technically, it uses voice samples from real people to learn how to translate written words into speech, but not their “real” voices. So you can type in a phrase and hear it read, passably, in SpongeBob’s voice. I can see the media companies that own these characters suing for copyright infringement if this gets on their radar, but is it inherently unethical?

    • It may just be me, but those voices didn’t seem close enough to the original characters. The tones would be slightly off, or the tempo wouldn’t be correct. Still, 15.ai was the best, when it was up. As for using a voice synthesizer to make lines sound like fictional characters? I think it generally should not be unethical. Fictional characters aren’t people; they don’t have livelihoods and reputations that can be ruined by a fake voice line. As long as a fictional character’s fictional lines aren’t being used for nefarious purposes, there should be nothing wrong with sites like 15.ai (when they’re actually up).

      • Who defines “nefarious purposes”? That’s another sticky wicket, since which party is the biggest supporter of free speech has reversed in my lifetime. And the fact that heirs and corporations have the legal right to put out-of-character words in a character’s mouth doesn’t mean doing so breaks nothing. (Ref: Luke and Yoda in the Star Wars sequel trilogy.)

        If the makers of the faked audio are not affiliated with the characters’ owners, and the work is not clearly marked as unofficial, made with permission or a license, or covered by fair use, it’s just your basic plagiarism. Making a profit on someone else’s property is a no-no.

        It’s just too bad that woke and corporate efforts have lost all the creativity and authenticity of earlier times and of amateur creators. There are astoundingly good fan works for every IP, yet the woke-dominated corporations have trouble making anything of worth. But fan creators do it for love instead of politics, and make no pretense of owning anything or profiting. There are even writing tools to identify plagiarized papers and fiction. Unless clearly identified, faking characters is just audio plagiarism.
