A Show Of Hands, Now: Who’s Shocked That A “Technology Misinformation” Expert Used A.I. Generated Fake Information?

Gee, what a surprise. But as Mastercard would say, this story is priceless.

Professor Jeff Hancock is founding director of the Stanford Social Media Lab, and his faculty biography states that he is “well-known for his research on how people use deception with technology.” Apparently he knows the subject very well: Hancock submitted an affidavit supporting new legislation in Minnesota that bans the use of so-called “deep fake” technology in support of a candidate (or to discredit one) in an election. Republican state Rep. Mary Franson is challenging the law in federal court as a violation of the First Amendment (which, of course, it is). But Democrats don’t like the First Amendment. Surely you know that by now.

But I digress…

Hancock is so skilled at technological misinformation that his expert declaration in support of the deep fake law cited numerous academic works that don't exist. If ChatGPT were Zorro, this would be the sign it would leave on its victims. Generative A.I. software makes stuff up. But you know that, right? I know that. Why didn't this expert know that? (Or did he?)

The declaration cites a study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," for example, and says that it was published in the Journal of Information Technology & Politics in 2023. That journal shows no such paper, and academic databases have no record of its existence.

“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” attorneys for the plaintiffs write. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question.”

Ya think? Volokh Conspiracy founder Professor Eugene Volokh found that another one of Hancock’s citations, to a study titled “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” also doesn’t exist. It doesn’t matter whether the fake citations were inserted by Hancock or one of his assistants. His declaration concludes with the sworn “I declare under penalty of perjury that everything I have stated in this document is true and correct.”

And that’s the metaphorical ballgame. Hancock is entirely responsible for submitting false information, and his credibility as an expert witness should be, and probably will be, over for good. He should also be fired: submitting false information in an official proceeding using his status as a Stanford professor as a credential is bad, but doing so when his area of expertise is technology-aided misinformation is unforgivable. It isn’t as though the problem of AI bots “hallucinating” hasn’t been well-publicized, and even if it hadn’t been, this is his field.

What an idiot.

2 thoughts on "A Show Of Hands, Now: Who's Shocked That A "Technology Misinformation" Expert Used A.I. Generated Fake Information?"

  1. It may not be logically necessary that those who would impose censorship to combat “misinformation” are in fact out to peddle misinformation of their own. It may not always be the case that they intend to use censorship to keep others from pointing out their deception. It just so happens it turns into that kind of scenario so reliably, and so quickly, that it should probably be regarded as akin to a law of nature at this point.

