Unethical Technology On The Way: Imagine What Breitbart Will Be Able To Do With THIS

The video above shows a still-in-development system called Face2Face (research paper here) created by researchers at Stanford, the Max Planck Institute and the University of Erlangen-Nuremberg. It would allow you to take YouTube video of anyone speaking and pair it with a standard webcam video of someone else emoting while saying something entirely different. The Face2Face system then synthesizes a new video showing the original speaker making the second speaker’s facial movements, including the interior of the mouth, so it looks like the original speaker is saying what the second speaker said.
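For the technically curious: the paper describes fitting a parametric face model (a mean face plus identity and per-frame expression components) to both videos, then keeping the target actor’s identity while substituting the source actor’s expression coefficients frame by frame. Here is a minimal NumPy sketch of that transfer step only, with made-up dimensions and random stand-in data; it is a toy illustration of the idea, not the authors’ implementation.

```python
import numpy as np

# Toy parametric face model: a face mesh is the mean shape plus weighted
# identity and expression offsets. All sizes and data here are hypothetical.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(100, 3))      # stand-in 100-vertex face mesh
id_basis   = rng.normal(size=(100, 3, 5))   # 5 identity components
expr_basis = rng.normal(size=(100, 3, 4))   # 4 expression components

def reconstruct(identity_coeffs, expression_coeffs):
    """Rebuild a face mesh from identity and expression coefficients."""
    return (mean_shape
            + id_basis @ identity_coeffs
            + expr_basis @ expression_coeffs)

# Reenactment: the target actor's identity stays fixed, while the source
# actor's per-frame expression coefficients are swapped in.
target_identity = rng.normal(size=5)
source_expression_per_frame = rng.normal(size=(10, 4))  # 10 video frames

reenacted = [reconstruct(target_identity, e)
             for e in source_expression_per_frame]
```

The actual system then renders these reenacted meshes photo-realistically back into the target video, including the mouth interior, which is the hard part this sketch omits entirely.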

TechCrunch reports that the system isn’t quite ready for market yet. Gee, I can hardly wait. This “advance” has the potential to make video just as unreliable and untrustworthy as still photography is now. Web hoaxers, Ted Cruz’s marketing team, unscrupulous political websites like Breitbart and others will have a field day once Face2Face is perfected.

The justification for creating such technology is the same as the rationalizations behind cloning velociraptors in “Jurassic Park”: because we can, and because we can make money with it. Can any good come from Face2Face? It’s late and I’m not at my best, but it seems to me that the end results of having another tool for liars just means more lies, more cynicism, more misinformed people, and less trust.

Isn’t it irresponsible and inherently unethical to invent something like this?

22 thoughts on “Unethical Technology On The Way: Imagine What Breitbart Will Be Able To Do With THIS”

  1. Yes, yes it is unethical to give birth to such a monster. Talk about “putting words in someone’s mouth”, literally. I just can’t see any useful purpose…..but maybe it explains all of the outrageous things Trump has been saying. They have been having a high time perfecting the beta version!

    • As always, this is a good general principle, but one with exceptions. Let’s invent a machine that has no function except to destroy the planet. The only ethical use of the device is not to use it. Is that invention unethical?

      • “The only ethical use of the device is not to use it.” Now you’re in the area of the movie, “War Games” and the “WOPR” (“whopper”) war-gaming and war-simulating computer, that in the end, lectured the lucky US and Soviet military brass about the “M.A.D.” futility of all-out nuclear war.

        Perhaps the capability of self-annihilation confirms that the ONLY ethical use of a doomsday device is to USE it – to ensure abortion of a human race that had proved itself (finally!) sufficiently unethical to invent such a thing, therefore having proved itself (as it had been, all along) unworthy of eternal exemption from a doomsday.

  2. There is one: movie FX. A more primitive version was used for Gollum, Guardians of the Galaxy and other CGI characters. Now those animated award presenters won’t have to be prerecorded. Also think of the interactive characters in an advanced amusement park. The Jurassic Park reference is relevant for more than one reason.

  3. No. Not unethical. I think we’ve had a similar discussion before. Things are not unethical, regardless of whether or not they have a greater likelihood of being used unethically than other more ethically neutral things.

    Ethics applies to conduct or in this case conduct *using* an inanimate object. I don’t know if Ethics applies to the inanimate object itself.

    For an extreme example: if a robot were programmed to walk around punching people as its only function, I’d still say the robot isn’t unethical, but rather the individual who programmed said robot to do such an act.

    This technology? I could see its uses. Remember the scene in Forrest Gump with an obviously fictionalized meeting between Gump and Kennedy? And how obviously CGI’d Kennedy’s dialogue was?

    Or how about a purely historical rendering where we KNOW a president is on record saying something but there is no video of the same, yet a documentarian or moviemaker wants to capture that moment and has enough other video of the same subject to effectively dub the actual words in? I’m not sure that’s unethical.

    The issue here is the ability to falsify with the intent to malign others. But that is already unethical regardless of the technology available to make that act easier or not.

    • When we’ve had this discussion before, it’s been about the “ick factor,” or technology that could obviously be misused. The H-bomb and other weapons, cloning—telepathy devices in science fiction.

      I don’t see that here. This is the reverse, technology that takes a lot of spinning to imagine even trivial benefits. We already have CGI slowly getting to the point where we can put John Wayne in a contemporary film—that covers the need for another Forrest Gump.

      • “We already have CGI slowly getting to the point where we can put John Wayne in a contemporary film—that covers the need for another Forrest Gump.”

        Isn’t that just saying we already have or are about to have technology, that with just a little more effort will allow for all the exact same unethical misuse we are worried about with the technology in question?

        So wouldn’t this CGI and the technology in question have essentially the same ethical value?

        • CGI meets the standard of the neutral technology with both ethical and unethical applications. This Face2Face is “distort an accurate video to fool people.” Yes—with complete transparency and a disclaimer, that device’s damage would be reduced. But we have eliminated the ability of video to constitute a reliable historical record.

          • “…[W]e have eliminated the ability of video to constitute a reliable historical record.”

            That delights me! I can’t wipe the smile of anticipation off my face. I am just itching for my next chance to bamboozle one or more of my grandchildren while watching TV with them. I have done this before; I even did it with my kids when they were young (at the perfectly gullible age): I waited for the perfect moment, as the kids and I watched some history program that included re-enactments of, say, the Battle of the Alamo, or the 1860s American War of Secession, or Thermopylae with the Spartans and Persians. Then I would interject something like, “Those armies had some especially good cameras and camera operators! It’s really sad that so many of them got killed.” I lost count of how many times that prank has worked. Perpetrating it sets up a precious teaching moment. Grandpa’s power trip.

  4. This is the perfect example of intelligent people walking around with tunnel-vision blinders on, ignoring the in-your-face unethical uses of the technology they are developing or enhancing.

    Just because we the people have the right to do and say whatever we want does not make what we do and say right. Any questions?

    Face2Face: Real-time Face Capture and Reenactment of RGB Videos


    “…University of Erlangen-Nuremberg, Max-Planck-Institute for Informatics, Stanford University”

    “…our goal is the online transfer of facial expressions of a source actor captured by an RGB sensor to a target actor. The target sequence can be any monocular video; e.g., legacy video footage downloaded from Youtube with a facial performance. We aim to modify the target video in a photo-realistic fashion, such that it is virtually impossible to notice the manipulations. Faithful photo-realistic facial reenactment is the foundation for a variety of applications; for instance, in video conferencing, the video feed can be adapted to match the face motion of a translator, or face videos can be convincingly dubbed to a foreign language.”

    “…we believe our system will pave the way for many new and exciting applications in the fields of VR/AR, teleconferencing, or on-the-fly dubbing of videos with translated audio.”

    “This research is funded by the German Research Foundation (DFG), grant GRK-1773 Heterogeneous Image Systems, the ERC Starting Grant 335545 CapReal, and the Max Planck Center for Visual Computing and Communications (MPC-VCC)…”

    Jack asked, “Can any good come from Face2Face?”

    Since you used the word “any” in that question, I have to answer “yes… maybe”; it all depends on how one defines “good.” But the genuine, in-your-face (if you bother to look) possible unethical uses of the technology that are clearly bad will far, Far, FAR, FAR outweigh any minuscule good uses.

    Jack asked, “Isn’t it irresponsible and inherently unethical to invent something like this?”

    YES; however, a tunnel-visioned research scientist will likely tell you no and then try to justify that answer with all kinds of unethical rationalizations.

    There are a lot of quotes attributed to Einstein; I’m not sure they are all actually his own creation, but here are a couple of my favorites…

    “Technological progress is like an axe in the hands of a pathological criminal.”

    “It has become appallingly obvious that our technology has exceeded our humanity.”

    “Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”

    …and my absolute favorite…

    “I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”

    Have we already reached this point? If not with the majority of the generation currently being educated (6–22 years old), then certainly with the generation that will shortly follow.

  5. Yes, it is unethical but think about what Mel Brooks could have done with this say in a Hitler speech, in making “The Producers”!

  6. My “on the fly” comment (above) is in need of clarity. Yes, the invention of Face2Face is irresponsible and unethical, but not inherently so. What makes the invention unethical is (as Jack wrote) that it will make “video just as unreliable and untrustworthy as still photography is now.” As Zoltar Speaks! commented, the disvalue of potential unethical uses of the technology clearly outweighs any potential value of ethical uses of the technology. Texagg04’s comment reveals another weakness in my comment. My comment, “Its invention and existence are a waste of resources that would have been better used elsewhere,” refers to the behavior of the inventors and their use of resources, not the inanimate objects involved.

    As for the other big question (of which Face2Face isn’t an example), any invention “that has no function except to destroy the planet” is unethical. There are only two conditions under which I conceive such an invention could be ethical: (1) Our existence and the existence of our planet have some evil effects on the universe of which we’re totally unaware; and (2) Existence of the threat of total destruction somehow, miraculously, results in worldwide ethical behavior. Since we’ve now lived with this threat (or, certainly something close to it) for several decades, and ethical behavior seems to be waning, I don’t perceive the latter exception to be viable. Those inventors of our past with hopes of creating machines so destructive that they would end the prospect of war were mistaken. Those in the future will likely be so as well. As for the first exception, it is purely conjecture upon conjecture and may have no relationship to human ethics whatsoever.
