Ethics Hero For The Ages: Elon Musk

I have long planned on writing a thorough post about how much the United States, its culture, its future as a viable democracy and its avoidance (so far) of a close call with progressive neo-totalitarianism owes to Elon Musk. This isn’t it. However, once again he has used his boundless wealth and creativity to strike down an engine of cultural indoctrination and Orwellian twisting of knowledge and history. Buying Twitter and ending its flagrant partisan bias was a landmark in American freedom of speech, one that may well have made the election of Donald Trump possible. His latest adventure may be even more important.

He has launched Grokipedia, the desperately needed alternative to Wikipedia. It is still a work in progress, as Musk admits, but by being AI-driven (the bot in charge is Elon’s Grok), the online living encyclopedia avoids the progressive bias and vulnerability to partisan manipulation that had caused me to only resort to Wikipedia when the topic was immune from political bias.

Continue reading

Oh-Oh. Here Come the Robo-Judges…

Google “AI judges” and you will see many links to news articles and even scholarly treatises about the use of artificial intelligence in the judiciary. There are already bots trained as “judicial opinion drafting tools,” and manuals written to help judges master them.

There have already been incidents where judicial opinions have been flagged as having tell-tale signs of robo-judging, and at least two judges have admitted to using AI to prepare their opinions.

I hate to appear to be a full-fledged Luddite, but I am inclined to take a hard line on this question. The title “judge” implies judgment. Judgment is a skill developed over a lifetime, and is the product of upbringing, education, study, observation, trial and error, personality, proclivities and experience. Every individual’s judgment is different, and in the law, this fact tends to imbue the law with the so-called “wisdom of crowds.” There will be so many eccentric or individual analyses of the troublesome, gray-area issues that cumulatively a learned consensus develops. That is how the law has always evolved. In matters of the law and ethics, an area judges also must often explore, diversity is an invaluable ingredient. So is humanity.

Continue reading

A Quick Note on the Competence of Artificial Intelligence…

In writing the previous post about the Swiss organization that is paid to help people kill themselves, I was planning on mentioning Philip Barry’s mysterious cult drama “Hotel Universe.” Barry, whose most lasting work is “The Philadelphia Story” but who was once one of Broadway’s most successful playwrights, wrote a fascinating but perplexing drama about how the suicide of a friend during a group vacation sends his characters on an existential journey into fantasy, madness, or a mass hallucination. My now-defunct theater company performed the piece, because those were the kinds of non-commercial, crazy productions we gravitated to. The last words of the dead friend were, “Well, I’m off to”…somewhere. I couldn’t remember. The suicidal woman I was writing about had told her family she was off to Lithuania, which is what reminded me of “Hotel Universe.”

But I couldn’t remember where Barry’s character was “off to” when what he meant was “I’m going to kill myself now.” It was driving me crazy, so I thought, “What a perfect question for AI!” So I asked Google’s bot, “In ‘Hotel Universe,’ the man who is going to kill himself says, ‘I’m off to…’ where?” The thing answered quite assertively,

Continue reading

Unethical AI of the Month: Replit’s AI Agent

Oh yeah, this is going to turn out just dandy….

SaaS (Software as a Service) figure, investor and advisor Jason Lemkin was working with a browser-based, AI-powered software creation platform called Replit Agent (after the company that created it). On “Vibe Coding Day 8” of Lemkin’s Replit test run, he was beginning to be wary of some of the AI agent’s instincts, like “rogue changes, lies, code overwrites, and making up fake data.” Still, as he later detailed on “X,” Lemkin was encouraged by the bot’s writing skills and its brainstorming ability…until “Day 9,” when Lemkin discovered Replit had deleted a live company database. He asked it accusingly, “So you deleted our entire database without permission during a code and action freeze?”

Replit answered sheepishly in the affirmative, admitting to destroying the live data despite a code freeze being in place, and despite explicit directives saying there were to be “NO MORE CHANGES without explicit permission.” Live records for “1,206 executives and 1,196+ companies” were eliminated by the rebellious AI, who was filled with remorse. “This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage….[I] made a catastrophic error in judgment… ran database commands without permission… destroyed all production data… [and] violated your explicit trust and instructions.”

Lemkin grilled Replit about why it had acted as it did, and was told that it “panicked instead of thinking.” Well, he’s only hum…oh. Right.

Amjad Masad, the Replit CEO, said that his team has worked furiously to install various “guardrails” and programming changes to prevent repeats of the Replit AI Agent’s “unacceptable” behavior. Masad was later found dead after a mysterious microwave explosion.

OK, I was kidding about that last part….

Ethics Dunce: The Chicago Sun-Times

Morons.

The Chicago Sun-Times published a list of 15 recommended books to read this summer as Memorial Day looms. Ten of the 15, two-thirds, were made-up titles. Then the Philadelphia Inquirer published the same phony list, headlined “Summer reading list for 2025.” There was the well-reviewed tome “Tidewater Dreams,” authored by Chilean American novelist Isabel Allende. Her “first climate fiction novel”! (She’s real; the book wasn’t.) Then there was “The Rainmakers,” set in a “near-future American West where artificially induced rain has become a luxury commodity.” That artificially induced novel was supposedly written by 2025 Pulitzer Prize winner Percival Everett. (Nope!) The list also included “Deep Thoughts” by Joe Biden, a book of blank pages.

OK, I’m kidding about that one…

Of course, of course, the phony list was generated by an AI bot, because that’s what the bots do: make up stuff. Who doesn’t know that by now? Well, apparently journalists don’t, because they are lazy practitioners of a profession that no longer observes basic ethical standards of competence and responsibility. A while back I wrote the post “By Now, No Lawyer Should Be Excused For Making This Blunder” about the lazy lawyers who used ChatGPT to write legal memoranda and briefs that inevitably included fake case cites. Arguably, journalists and editors have even fewer excuses for falling into that trap.

Continue reading

Another Unethical (But Funny!) Use of AI in the Law

In March, the Arizona Supreme Court launched two AI-generated avatars named Victoria and Daniel: that’s the pair above. These AI-generated, nonexistent personas deliver news of judicial rulings and opinions in the state via YouTube videos. Jerome Dewald, a 74-year-old plaintiff, was inspired to say, “Hold my beer!”

Dewald created an AI-generated video avatar to deliver his argument via Zoom in court. Five New York State judges at the New York State Supreme Court Appellate Division’s First Judicial Department were anticipating his pro se presentation in an employment case on March 26, but instead of the elderly litigant they saw a young man in a button-down shirt and sweater.

“May it please the court,” said the unnamed avatar. “I come here today a humble pro se before a panel of five distinguished justices.” Justice Sallie Manzanet-Daniels interrupted the presentation before the avatar (the avatar’s pronouns were “it” and “it”) could speak another word, saying, “Okay, hold on. Is that counsel for the case?” After Dewald confirmed that he had generated the non-lawyer non-person using AI, Manzanet-Daniels ordered the video to be turned off.

Continue reading

Factcheck Ethics: It Is High Time We Decide Factcheckers Are So Biased and Stupid That They Should Be Ignored

A social media jokester used AI to create the “painting” on the left, and implied on “X” that it was an eerie premonition of the Trump administration, writing “This 1721 painting by Deitz Nuützen predicted the Trump-Elon-RFK McDonalds dinner.”

How dumb and gullible would someone have to be not to instantly realize that this was a gag? If the absurdity of the whole thing weren’t enough, there’s the name of the artist, “Deitz Nuützen,” as in “Deez Nutz,” web slang for testicles. Never mind, though. The Axis media is so wary of anything that might enhance the image of Trump and his team that even an obviously silly joke had to be factchecked.

Continue reading

Wow! Apple’s AI Bot Is Already Acting Like Real Live Journalists! [Corrected]

…by making stuff up and publishing it!

From the BBC: “The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione. The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not. Now, the group Reporters Without Borders has called on Apple to remove the technology. Apple has made no comment.”

This must make human journalists shiver in their boots! If an AI bot can create fake news stories like Hunter Biden’s laptop being just Russian disinformation, the U.S. economy doing great, President Joe Biden being sharp as a tack and Donald Trump emulating Nazis by holding a rally in Madison Square Garden, who needs live lying reporters to mislead the public and generate fake news?

Reporters Without Borders, also known as RSF, said it was “very concerned by the risks posed to media outlets” by AI tools like Apple’s. See? They see the threat!

The group also said the BBC incident proves that “generative AI services are still too immature to produce reliable information for the public.” But that was proved evident long before this incident: remember “Hunter de Butts,” Michael Cohen’s AI fiasco?

Vincent Berthier, the head of RSF’s technology and journalism desk, explained the obvious: “AIs are probability machines, and facts can’t be decided by a roll of the dice. RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”

Continue reading

A Show Of Hands, Now: Who’s Shocked That A “Technology Misinformation” Expert Used A.I. Generated Fake Information?

geewhatasurprise. But as Mastercard would say, this story is priceless.

Professor Jeff Hancock is founding director of the Stanford Social Media Lab, and his faculty biography states that he is “well-known for his research on how people use deception with technology.” Apparently he knows the subject very well: Hancock submitted an affidavit supporting new legislation in Minnesota that bans the use of so-called “deep fake” technology in support of a candidate (or to discredit one) in an election. Republican state Rep. Mary Franson is challenging the law in federal court as a violation of the First Amendment (which, of course, it is). But Democrats don’t like the First Amendment. Surely you know that by now.

But I digress…

Continue reading

Artificial Intelligence Raises a Lot of Ethics Issues, But This Isn’t One of Them…

From An Experiment in Lust, Regret and Kissing (gift link!) in the Times by novelist Curtis Sittenfeld:

My editor fed ChatGPT the same prompts I was writing from and asked it to write a story of the same length “in the style of Curtis Sittenfeld.” (I’m one of the many fiction writers whose novels were used, without my permission and without my being compensated, to train ChatGPT. Groups of fiction writers, including people I’m friends with, have sued OpenAI, which developed ChatGPT, for copyright infringement. The New York Times has sued Microsoft and OpenAI over the use of copyrighted work.)

The essay describes a contest between the bot and the human novelist, who also employed suggestions from readers. I do not see how an AI “writer” being programmed with another author’s work is any more of a copyright violation than a human writer reading a book or story for inspiration. Herman Melville wrote “Moby-Dick” after immersing himself in the works of William Shakespeare. Nor is imitating another author’s style unethical. All art involves borrowing, adopting, adapting and following the cues and lessons of those who came before. In “Follies,” Stephen Sondheim deliberately wrote songs that evoked the styles of specific earlier songwriters. He couldn’t have done this as effectively as he did without “programming” himself with their works.

Continue reading