Unethical AI Use of the Month

In Great Britain, an A.I.-generated image that appeared to show major damage to Carlisle Bridge in Lancaster prompted authorities to halt trains following a minor earthquake. The tremor was felt across Lancashire and the southern Lake District. After the image appeared online, Network Rail ended rail service across the bridge until safety inspections had been completed. The delay inconvenienced commuters and wasted public funds. Here is the bridge and the bot-built fake version:

As far as we know a human being was behind the hoax, not a mischievous bot. But A.I. is almost certainly going to challenge Robert Heinlein’s famous declaration that “There are no dangerous weapons; there are only dangerous men,” not to mention the fact that there are a lot of dangerous women out there too.

ChatGPT has been accused of encouraging people to commit suicide, for example, and Professor Jonathan Turley wrote that ChatGPT defamed him for reasons yet to be determined.

Continue reading

No, Dr. Gelman, Just Because You Think Your Toaster Is A Lawyer Doesn’t Mean What You Say To It Is Privileged

It continues to amaze me whom the New York Times will give a platform to. Take Dr. Nils Gilman (please!), a historian who “works at the intersection of technology and public policy,” whatever that means.

He has written a supposedly learned column for the Times [gift link] claiming that human beings should have something akin to attorney-client privilege when they shoot off their mouths to their chatbots. His cautionary tale:

On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is [lit] because of your cigarettes?”… “Yes,” ChatGPT replied…. Rinderknecht…had previously told the chatbot how “amazing” it had felt to burn a Bible months prior….and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them behind a gate.

Somehow the bot squealed to federal authorities. Those conversations, along with GPS data that put him at the scene of the initial blaze, were considered sufficient evidence of Rinderknecht’s mind, motives and intent to start a fire, and the feds arrested and charged him with several criminal counts, including destruction of property by means of fire, alleging that he was responsible for a small blaze that reignited a week later to start the horrific Palisades fire.

To the author, “this disturbing development is a warning for our legal system.” You see, lonely, stupid people are using A.I. chatbots as confidants, therapists and advisers now, and the damn things cannot be trusted. “We urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it A.I. interaction privilege,” he pleads.

Continue reading

Incompetent Elected Official of the Week: Porto Alegre, Brazil City Councilman Ramiro Rosário

A city in southern Brazil just enacted the country’s first legislation entirely written by AI bot ChatGPT. Normally the misadventures of a Brazilian local pol wouldn’t turn up on the EA radar, but you know—you know—that this story’s equivalent is coming soon to our shores, if it isn’t here already.

The Associated Press reports that Porto Alegre city councilman Ramiro Rosário admitted to having ChatGPT write a proposed law aimed at preventing the city from forcing locals to pay for replacing stolen water consumption meters. He didn’t make a single change to the AI-generated bill, and didn’t even tell the city council that he didn’t write it. “If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP. “It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence.”

It’s unfair to let the public know that they are being governed by machines, or that their elected officials are too lazy or dumb to compose their own bills. Got it.

Porto Alegre’s council president Hamilton Sossmeier extolled the new law on social media and was embarrassed when its true author was revealed. He then called letting bots write legislation a “dangerous precedent.” Ya think? Massachusetts state senator Barry Finegold says that he has used AI to draft bills, but that he wants “work that is ChatGPT generated to be watermarked….I’m in favor of people using ChatGPT to write bills as long as it’s clear.” I think he means “clear that a bot was involved.” It’s ambiguous language like Barry’s sentence that makes it seem like ChatGPT is an improvement over human public servants.

These AI bots continue to make stuff up, cite imaginary sources, and lie…you know, just like real politicians. For his part, Rosário sees nothing wrong with letting a bot do the work he was elected to do. “All the tools we have developed as a civilization can be used for evil and good,” he told the AP. “That’s why we have to show how it can be used for good.”

Secretly employing a machine to do your work and not disclosing that fact is called “cheating.” Somebody explain to the councilman that cheating is not “good.”

Ick, Unethical, or Illegal? The Fake Scarlett Johansson Problem

This is one of those relatively rare emerging ethics issues that I’m not foolhardy enough to reach conclusions about right away, because ethics itself is in a state of flux, as is the related law. All I’m going to do now is begin pointing out the problems that are going to have to be solved eventually…or not.

Of course, the problem is technology. As devotees of the uneven Netflix series “Black Mirror” know well, technology opens up as many ethically disturbing unanticipated (or intentional) consequences as it does societal enhancements and benefits. Now we are all facing a really creepy one: the artificial intelligence-driven virtual friend. Or companion. Or lover. Or enemy.

This has been brought into special focus because of an emerging legal controversy. OpenAI, the creators of ChatGPT, debuted a seductive version of the voice assistant last week that sounds suspiciously like actress Scarlett Johansson. What a coinkydink! The voice, dubbed “Sky,” evoked the A.I. assistant with whom the lonely divorcé Theodore Twombly (Joaquin Phoenix) falls in love in the 2013 Spike Jonze movie, “Her,” and that voice was performed by…Scarlett Johansson.

Continue reading

Comment of the Day: “’Ick or Ethics’ Ethics Quiz: The Robot Collaborator”

Here’s a fascinating Comment of the Day by John Paul, explaining his own experiences with ChatGPT relating to yesterday’s post, “’Ick or Ethics’ Ethics Quiz: The Robot Collaborator”:

***

Well, if it’s a competition, and against the rules, I think it’s pretty easy to say yes, it’s unethical.

However, to help out with just some simple problems, I see using an AI program as no different than asking an editor to go over your book. As someone who has messed around with AI on this particular level (mostly for help with grammar and syntax issues), I have concluded that its contributions are dubious at best, at least as far as the technology has advanced so far.

Consider the following: Here are two paragraphs I wrote for my book last night:

“Kesi stared at the back of the door for a long time. At some point, she lifted her hand to gingerly touch the spot that was starting to numb across her check. Its bite stung upon contact with her sweaty fingers and she reflexively drew it away, just to carefully guide it back again. For a brief moment she played this game of back and forth much like the younglings who would kick the ball in the yard, until she finally felt comfortable with feeling of leaving her hand to rest upon her face. When it finally found its place, the realization of what had just happened hit her just as quickly and suddenly as if Eliza slapped her.”

“Not once, not twice, but Eliza slapped her three times with enough force to send tears down her face. In the moment she might have been too confused to see what was going, but now she was forced to grapple with the weight of the truth that was settling in her chest. (Yes, I realize this isn’t the greatest prose, but it was 2am and I was tired).”

Here’s what ChatGPT suggested I do with those sections when correcting for issues:

Continue reading