The A.I. Ethics Problem in News Reporting

Guest post by Matthew B.

JM Introduction: This excellent post arrived on yesterday’s open forum, and thus was immediately eligible for guest column status. It is especially timely, both because of this story from the legal ethics jungle and this more alarming one:

The top United States Army commander in South Korea revealed to reporters this week that he has been using a chatbot to help with decisions that affect thousands of U.S. soldiers. Major General William “Hank” Taylor told the media in Washington, D.C., that he is using AI to sharpen decision-making, but not on the battlefield. The major general — the fourth-highest officer rank in the U.S. Army — is using the chatbot to assist him in daily work and command of soldiers.

Speaking to reporters at a media roundtable at the annual Association of the United States Army conference, Taylor reportedly said “Chat and I” have become “really close lately.”

Great. What could go wrong? Now here’s Matthew…

***

One of the problems with AI is how often it is confidently wrong. This manifests itself all over the place, and one of the most troubling examples is in the news industry. The news industry is under tremendous financial pressure, and the appeal of moving toward AI-generated content opens it up to spreading complete BS stories.

There are several great recent examples.


Stop Making Me Defend Tilly Norwood!

Hollywood actors are freaking out over fake actress “Tilly Norwood.” That’s already a plus to the AI-generated performer’s credit: Hollywood actors deserve to be freaked out as often as possible (within the boundaries of law and ethics, of course). It gives them something to scream about other than how the President of the United States is a fascist, or how more unborn babies should be killed. And in cases like this one, where their freaking out reveals just how hypocritical and intellectually shallow they are, it’s a public service: NOW do you understand why you shouldn’t pay attention to these one-trick millionaires?

Tilly Norwood, in case you never watch E! or read Variety, is an AI-generated fake actress with about 40,000 Instagram followers who don’t have a life. Tilly was created by Xicoia, the AI division of the production company Particle6, from the rib of an AI-created actor. OK, I’m kidding about that.

Eline Van der Velden, the Dutch producer who founded Particle6, claims to be seeking an agent to represent Norwood and place her in real films, ads and TV shows, unlike the fake, AI-created scenes in her videos.


As If Any More Proof Was Needed, Trump 1.0 Nemesis Jim Acosta Reveals Himself Beyond All Question To Be An Unethical Hack

You see, no decent, ethical journalist would even think of doing this. No intelligent journalist—or pest removal professional—would either. Yet this is the guy CNN sicced on President Trump and his press secretaries in his first term. This irredeemable partisan hack became a broadcast news star with none of the common sense, acumen, professional skills or decency to justify such status, which he is still making a living off of now.

This is CNN. This is Jim Acosta. This is the state of American journalism.

Former CNN correspondent Jim Acosta released a video of himself interviewing an AI-generated version of Joaquin Oliver, who is dead. He was one of the 17 victims of the 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, the tragedy that also inflicted David Hogg on the world, as if the shooting itself wasn’t horrible enough.

The avatar was animated from a photograph of the late 17-year-old, who appears wearing a beanie and speaking in a monotone digital voice. Acosta begins by asking, “What happened to you?” to which the AI version of Oliver responds, “I was taken from this world too soon due to gun violence while at school. It’s important to talk about these issues so we can create a safer future for everyone.”

Let’s pass on the conduct of the parents in creating the creepy thing, which is right out of an episode of “Black Mirror.” The topic is journalism ethics. Today’s reporters are so estranged from the concepts of honesty, respect, objectivity, responsibility and trustworthiness that no ethics alarm pings when someone says, “Hey Jim! Apparently there’s an AI version of one of those dead Parkland kids. Why don’t you interview him? Maybe he’ll say something nasty about Trump!”

True, Acosta is pretty much the bottom of the barrel in the profession that is already the bottom of the profession barrel, but still, it wasn’t that long ago that a stunt like this would be considered outrageous if attempted by a shock jock like The Greaseman or Howard Stern. I would say that this is the canary dying in the mine, except that then Chris Cuomo or Don Lemon might interview an AI version of the canary.

[Even WordPress is disgusted; it won’t let me download a photo of this asshole.]

Ethics Quiz: The Google AI Olympics Commercial

Google pulled the ad after a wave of criticism on social media.

Is the ad encouraging children to use AI instead of writing their own messages and letters? Is it an invitation to cheat in school? Does it suggest that robots are better at expressing genuine human feelings than humans are? Is having someone, or something, write your fan letters to a personal hero a cop-out? A lie?

Is the commercial “Ick!”, unethical, or just ominous?

Your Ethics Alarms Ethics Quiz of the Day is…

Is that Google AI ad irresponsible, corrupting—unethical? Did an ethics alarm fail to sound that should have?

I See That Ann Althouse Has Recognized the Increasingly Totalitarian Orientation of Progressives These Days….

The betting is that the retired Madison, Wis. law professor and longtime bloggress will still vote for Biden and the Democrats—like Bill Maher, Ann talks a good neutrality game, but always seems to come home again—but still, her observations are frequently spot-on.

This morning she notes that the top-rated comment — by a lot — at “A.I. Is Getting Better Fast. Can You Tell What’s Real Now?” is…

“Passing AI images off as real ones for the sake of commercial or political gain should be prosecuted as fraud. The severity of the penalties should match the level of risk that disseminating these images poses to our society; i.e., they should be extreme.”

Ann adds, “How terribly punitive and repressive, and yet, isn’t it what you’ve come to expect from the segment of America that reads the New York Times? Notice the aggression mixed with passivity. The comment-writer doesn’t want to face the challenge of becoming more perceptive and skeptical dealing with the onslaught of A.I. images. They want the government to do the dirty work and do it good and hard.”


Still More Law and Ethics Matters

Boy, the law and ethics intersection has been almost constantly in the news lately, led by the Fani Willis controversy in Georgia, which apparently will turn on whether the judge believes the justly beleaguered Fulton County DA really paid half of her paramour and co-Trump-prosecutor’s expenses on various platonic <cough> trips and cruises in cash, though there’s no record of such payments. Willis’s father even took the stand to explain that keeping huge amounts of cash on hand is “a black thing.” I did not know that!

As Alice would say, “Curiouser and curiouser!” Then we have much ferment in the legal world over whether the New York County Supreme Court’s order for Donald Trump to pay an unprecedented $355 million for inflating asset values in statements of his financial condition submitted to lenders and insurers was just, or was instead cruel and unusual punishment, a bill of attainder, or self-evident partisan lawfare. Gov. Kathy Hochul didn’t help matters by trying to justify the award by saying that Trump is special (wink, wink), and we all know what that means when coming from a Democrat. I confess, I don’t know the New York law involved well enough to weigh in on this one, but the verdict certainly adds to the weight of evidence that there is a full-on press to use the courts to crush Trump before he can crush Joe Biden.

There were two non-Trump law and ethics stories recently worth pondering.


“It Wasn’t Our Fault! That Bad Robot Did It!”

Hey, Air Canada! Can you say “accountability”? How about “responsibility”? Sure you can.

Jake Moffatt needed to fly from Vancouver to Toronto to deal with the death of his grandmother. Before he bought the tickets for his flights, he checked to see whether Air Canada had a bereavement policy, and the company’s website AI assistant told him he was in luck (after telling him it was sorry for his loss, of course). Those little mechanical devils are so lifelike!

The virtual employee explained that if he purchased a regular-priced ticket, he would have up to 90 days to claim the bereavement discount. Its exact words were: “If you need to travel immediately or have already traveled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.” So Moffatt booked a one-way ticket to Toronto to attend the funeral and, after the family’s activities, a full-price passage back to Vancouver. Somewhere along the line he also spoke to an Air Canada representative who was a human being—at least she claimed to be one—and who confirmed that Air Canada had a bereavement discount. He felt secure, between the facts he had obtained from the helpful bot and the non-bot, that he would eventually pay only $380 for the round trip after he got the substantial refund on the $1,600 in non-bereavement tickets he had purchased.

After Granny was safely sent to her reward, Jake submitted documentation for the refund. Surprise! Air Canada doesn’t have a reimbursement policy for bereavement flights. You either buy the discounted tickets to begin with, or you pay the regular fare. The chatbot invented the discount policy, just like these things make up court cases. A small claims adjudicator in British Columbia then enters the story, because the annoyed and grieving traveler sought the promised discount from the airline.


Fourth Of July Week Open Forum!

“We’re Number One! We’re Number One!”

Well, to be completely accurate, we’re all “[1]” right now for some reason. The whole blog, back to the beginning, now shows that as the screen name of every commenter, and my name is either missing entirely as author or, in some cases, “[1]” as well. I was first alerted around 5 am by Diego Garcia, and quickly contacted WordPress via an email to their “Happiness Engineers” (yes, they really call themselves that). I got a quick response from WP’s AI creature, who told me that I obviously had my settings wrong and gave me a dizzying sequence of things to click on, buried several layers deep in the system.

“Oh no you don’t!” I replied. Okay, what I actually wrote back was “Bullshit. I haven’t changed any settings, and you’re not going to lay this off on me. You caused the problem, the problem is yours, and you need to fix it. I am not a software engineer, and I don’t work for WordPress or robots. This is WordPress’s responsibility, and I expect WordPress to do it.”

Then I went back to bed. I was welcomed, upon awakening, to this from the modestly named “Deity,” my Happiness Engineer, who swears he is a Real Boy:

“I appreciate your patience and apologize for the inconvenience you’ve been experiencing. Based on your description, it indeed seems like this issue is related to a known bug that’s currently affecting WordPress blogs.
I just wanted to reassure you that our top-notch technical team is actively working on resolving this issue as swiftly as possible. However, I can understand the importance of having this issue mitigated in the interim period.
In the meantime, as a workaround, you can use the following CSS code to overcome the problem:

/* Make comment authors display properly */
.comment-meta .comment-author .fn { text-indent: 0; }
.comment-meta .comment-author .fn:after { display: none; }

“Please be advised that this is a temporary solution until we implement a more permanent fix. Again, thank you very much for your understanding on the matter and I’m extremely grateful for your patience. We value your trust in WordPress and promise to keep you informed with updates as they happen.”
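In case anyone else hits the same bug and wants to try the same trick, here is “Deity’s” snippet again with my best guess at what each rule does. The comments are my own reading, not WordPress’s explanation, and where you would paste it (the Additional CSS box in the WordPress customizer, I assume) is my assumption, not anything the Happiness Engineer specified:

/* Workaround CSS as quoted above; the explanatory comments are mine, not WordPress's */
.comment-meta .comment-author .fn {
  text-indent: 0; /* undo whatever indent was pushing the real commenter name out of view */
}
.comment-meta .comment-author .fn:after {
  display: none; /* hide the generated ::after content, presumably the source of the [1] label */
}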

So the AI was spitting out bullshit, as usual, just as I surmised! Good to know.

Let’s not allow this to spoil the open forum. Please begin your entries today with your Ethics Alarms name.

But you’re all #[1] to me!