No, Dr. Gilman, Just Because You Think Your Toaster Is A Lawyer Doesn’t Mean What You Say To It Is Privileged

It continues to amaze me whom the New York Times will give a platform to. Take Dr. Nils Gilman (please!), a historian who “works at the intersection of technology and public policy,” whatever that means.

He has written a supposedly learned column for the Times [gift link] claiming that human beings should have something akin to attorney-client privilege when they shoot off their mouths to their chatbots. His cautionary tale:

On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is [lit] because of your cigarettes?”… “Yes,” ChatGPT replied…. Rinderknecht…had previously told the chatbot how “amazing” it had felt to burn a Bible months prior….and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them behind a gate.

Somehow the bot squealed to federal authorities. Those conversations, along with GPS data that put him at the scene of the initial blaze, were considered sufficient evidence of Rinderknecht’s mind, motives and intent to start a fire, and the feds arrested and charged him with several criminal counts, including destruction of property by means of fire, alleging that he was responsible for a small blaze that reignited a week later to start the horrific Palisades fire.

To the author, “this disturbing development is a warning for our legal system.” You see, lonely, stupid people are using A.I. chatbots as confidants, therapists and advisers now, and the damn things cannot be trusted. “We urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it A.I. interaction privilege,” he pleads.

I call it ridiculous. The column goes on to argue for the importance of privacy, and that “without assurance of privacy, people self-censor and society loses the benefits of honesty.” Somehow he leaps from this arguable proposition to conclude that bots people think are trustworthy should be treated as if they are:

“People speak increasingly freely to A.I. systems, not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. A.I. systems can draw users out, just as a good lawyer or therapist does. Many people turn to A.I. precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.”

Yeah, but there’s just one teensy-weensy problem with this line of reasoning, which Dr. Gilman feels requires that “A.I. interaction privilege.” Just because some boob thinks that his Home Depot clerk looks like a lawyer doesn’t mean an attorney-client relationship is formed when he tells the clerk that he embezzled a million bucks from the bank he works for. Or, to be closer to what is being proposed in this galactically stupid opinion piece, just because I think my toaster or an ashtray is a priest doesn’t make it a priest.

The solution to the problem, to the extent that it is a problem, is 1) to teach people not to be irresponsible with technology and 2) not to bend over backwards to protect dumb people from their stupidity. Yes, as the Unabomber so sagely warned us (too bad his method of communication was that letter-bomb thingy), technology is a mixed blessing, and often makes life harder for some of us. (“Life is hard, and it’s harder if you’re stupid.” -John Wayne) Nonetheless, it is not the government’s job, or that of the legal system, to protect the dumbest among us from ourselves.

I bet Dr. Gilman nodded in agreement when his city’s commie mayor-to-be said that there is no problem too large for government to solve. If there is one problem the government shouldn’t try to solve and can’t solve, it is the proliferation of stupid Americans. The best it can aspire to is not to reward stupidity, and to try to protect the non-idiots from the idiots as much as possible.

Criminal defense lawyer Scott Greenfield has proposed another fix for the threat of the robot unauthorized legal/medical/clerical practitioner: program all chatbots to warn conversants that what they say to a bot is not confidential and could be used against them in court. Or maybe include a warning label on the things, like the “Do not eat” labels, that points out that these are not human beings but, like human beings, cannot be trusted.

“Will this stifle communication with chatbots? You bet it will, and given the quality of AI information on matters of life and death importance, that’s hardly a bad thing,” Greenfield concludes. “The danger isn’t that communications with a chatbot aren’t privileged, but that a chatbot’s answers may ruin people’s lives, and even kill people. This is not something that’s so vital to society that it deserves to be encouraged and protected.”

Bingo.

7 thoughts on “No, Dr. Gilman, Just Because You Think Your Toaster Is A Lawyer Doesn’t Mean What You Say To It Is Privileged”

  1. I agree. Don’t equate ChatBots to priests or lawyers or spouses. I don’t think, incidentally, that society thinks there is some privilege when talking to a ChatBot. It’s closer to a diary in which someone might write their plans for mass murder. The fact that this diary can HELP you plan the killing doesn’t make it more private. Emails and texts aren’t privileged, either, and society so far has been just fine with that.

    Where ChatBots will make a stronger case is when they are ACTING as lawyers. Or, FFS, priests. We aren’t that far off. Folks are already romantically involved with AI Avatars. I can easily see Chatbot Confessional, sponsored by some offshoot Catholic organization. And “ChatLawyer” is probably already here in some form. I tend to think that if someone makes a sincere religious confession or consultation with a ChatPriest, it probably should be just as protected as a human priest, and the same with lawyer-client privilege. But I’m not sure about my opinion. It’s a brave new world, surely.

  2. Priests and Lawyers are ordained or certified by human authorities. The only thing a ChatBot has is a coder. Society used to look askance at anyone who was talking to a tree or the empty air; now we validate their insanity. The children are not okay and we need to realize it.

  3. okay, leave it to my lawyer-nerd mind, but “where is the hearsay objection?”

    did the police get a voice recording from the chatbot?

    if not, can a chatbot provide “reliable hearsay” in support of probable cause?

    well, Chatbot, can you?

    Chatbot: “I’m sorry, Jut, I’m afraid I can’t do that.”

    -Jut

    • It is akin to looking at library borrowing history to get circumstantial evidence of means and motive. Web browsing or Google search history are directly akin as well.

      Without the direct evidence of his GPS placing him where the fire started, his search history would be irrelevant.

  4. While not on point, parents of a 16-year-old have sued OpenAI’s ChatGPT and its CEO, Samuel Altman, alleging that the AI generator encouraged, promoted, and actively assisted in their son’s suicide:

    https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147

    Does this lawsuit survive either a motion to dismiss or a motion for summary judgment? Not sure.

    I do not believe this professor is correct. Privileges are based on some form of expectation of privacy: the attorney-client privilege; the penitent-confessor/spiritual advisor privilege; the doctor-patient privilege; the spousal privilege; certain confidential informant privileges. There is no reasonable expectation of privacy with online activity, no matter how much the user . . . erm . . . uses incognito or privacy apps in search engines and websites. Recently, in Texas, a Democrat candidate for the House stands accused of friending and following OnlyFans and pornstar accounts on Instagram and Twitter.

    The Unabomber may have been right, just as sci-fi authors/writers have been writing about the explosion of technology, because we as a society have not caught up with the risks and/or implications of its uses and dangers.

    jvb

