It continues to amaze me whom the New York Times will give a platform to. Take Dr. Nils Gilman (please!), a historian who “works at the intersection of technology and public policy,” whatever that means.
He has written a supposedly learned column for the Times [gift link] claiming that human beings should have something akin to attorney-client privilege when they shoot off their mouths to their chatbots. His cautionary tale:
On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is [lit] because of your cigarettes?”… “Yes,” ChatGPT replied…. Rinderknecht…had previously told the chatbot how “amazing” it had felt to burn a Bible months prior…and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them behind a gate.
Somehow the bot squealed to federal authorities. Those conversations were considered sufficient evidence of Rinderknecht’s state of mind, motives and intent to start a fire that, along with GPS data putting him at the scene of the initial blaze, the feds arrested and charged him on several criminal counts, including destruction of property by means of fire, alleging that he was responsible for a small blaze that reignited a week later to start the horrific Palisades fire.
To the author, “this disturbing development is a warning for our legal system.” You see, lonely, stupid people are using A.I. chatbots as confidants, therapists and advisers now, and the damn things cannot be trusted. “We urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it A.I. interaction privilege,” he pleads.