The A.I. Ethics Problem in News Reporting

Guest post by Matthew B.

JM Introduction: This excellent post arrived on yesterday’s open forum, and thus was immediately eligible for guest column status. It is especially timely, both because of this story from the legal ethics jungle and this more alarming one:

The top United States Army commander in South Korea revealed to reporters this week that he has been using a chatbot to help with decisions that affect thousands of U.S. soldiers. Major General William “Hank” Taylor told the media in Washington, D.C., that he is using AI to sharpen decision-making, but not on the battlefield. The major general — the fourth-highest officer rank in the U.S. Army — is using the chatbot to assist him in daily work and command of soldiers.

Speaking to reporters at a media roundtable at the annual Association of the United States Army conference, Taylor reportedly said “Chat and I” have become “really close lately.”

Great. What could go wrong? Now here’s Matthew…

***

One of the problems with AI is how often it is confidently wrong. This shows up all over the place, and one of the most troubling areas is the news industry. The news industry is under tremendous financial pressure, and the appeal of moving toward AI-generated content leaves it open to spreading complete BS stories.

There are several great recent examples.

One is the widespread distribution of a claimed cause of the Iberian Peninsula power outage last spring. Suddenly pretty much every news agency ran with a story about weather causing earth oscillations that spread into the power system. That one is embarrassing, and it shows that none of the reporters who filed stories recalled basic 7th grade science. As an aside, there is also politics-based misinformation about the cause. It is accurate to say that a very badly implemented transition to wind and solar was a huge contributing factor. Anti-renewable people were saying the outage was caused by the transition to renewable power; pro-renewable people were saying it had nothing to do with renewable power. Reality was somewhere in the middle, but the government played politics and dismissed any claim about the impact of renewables as AI misinformation.

Another example not long after was the Air India 171 crash. AI took the actual events of LATAM flight 800, where a flight attendant serving the pilot food accidentally hit the seat controls, pushing the pilot into the control yoke and causing a sudden altitude drop, and used them to create a very realistic-looking report from India's Aircraft Accident Investigation Bureau (AAIB) claiming that is what happened to Air India 171. AI lacks any actual intelligence and will run with an idea at full vigor. It created a very detailed forgery, written in Indian English and laid out in the AAIB's official report format. It fooled many people, including professionals in the industry. Quite a few pilots running social media accounts had to apologize for being duped, having gotten caught up in the fervor without stopping to realize that a reliable accident report cannot be produced that fast.

Now I’m seeing entirely AI-generated news websites. They’re making their way into the Google News feed. The first one I noticed was in the renewable energy space (where I work). They’re absolutely 100% AI slop, and they make my task of staying informed about trends in my industry harder because they’re crowding out good reporting. This article is a great example, and the entire website is bullshit. I suspect the reporter is a fake, AI-generated persona: there is no LinkedIn bio, which real reporters almost always have. But I bet such pieces fool laypersons. I suggest looking at the picture in the link and seeing the ridiculous image hallucinated by AI. For comparison, here is the actual turbine.

Then yesterday I saw two different articles claiming that the full retirement age for Social Security will soon be raised to 68. That is a complete hallucination, with no basis in reality. Both appeared in Google’s news feed. The best I can figure is that Congress is discussing relabeling the terms used in the regulations, because many people are confused about what they really mean. There is zero serious discussion right now about adjusting the benefits, just relabeling what 62, 67 and 70 are called in relation to the benefits (for example, 62 being relabeled the “minimum benefit age” and 70 the “maximum benefit age”). Any discussion of actual benefit changes is being kicked down the road. Yet these two stories (here and here) confidently report that the start of full Social Security benefits will imminently be switched from 67 to 68. Both appear to be freshly created AI websites, loaded with links to filler content.

I don’t have great hope of this serious problem being addressed, because, frankly, many in the news industry are not sufficiently bright or responsible. Wokeness is valued over all other attributes, and it attracts the less than competent. The news industry was already on a downward trend, and this only makes it worse.

7 thoughts on “The A.I. Ethics Problem in News Reporting”

  1. I looked at the first linked “This article” and it gave me a chuckle. The top “trending” story on the site was a piece about using skyscrapers as “gravity batteries” that can somehow generate limitless power. The third “trending” story was about super-efficient solar panels capturing the light inside your house to recover all the power those lights use and “power everything,” as the headline suggests.

    Both of these are 100% perpetual-motion-machine-level bullshit, but I’m sure some people would believe it.

    …and as we already know, an AI will definitely believe it.

    –Dwayne

  2. I live in the middle of Silicon Valley which, by all rights, should now be called AI fantasy land. Recently, I met a young woman who is employed by one of the well-known AI firms and she offered a very simple example of why AI is not reliable. If you ask 100 people a relatively simple fact-based question that requires an 8th grade education, over 50% will get it wrong, whether it’s a date, an historical figure, etc. Any of the popular “man on the street” interviews are adequate evidence that she is right. Consequently, that’s the expected reliability of anything generated by AI. I guess we’re just doomed.

  3. Your site’s commenter screening algorithms have beaten me, but I want to say that I followed the links to the “alternative energy” magazine for laughs, and noted a couple of interesting items.

    First, there’s a tag at the bottom: “This article is based on verified sources and supported by editorial technologies.” I take it that “editorial technologies” is another name for LLM-generated slop.

    Second, the story on indoor solar panels doesn’t claim that they can power “everything”, but “everything that currently runs on batteries”. That, in fact, is plausible… but what do we run on batteries? Calculators? They’ve been running on indoor solar panels for decades. TV remote controls? Maybe… I don’t use TV. The actual energy consumed by battery-powered devices is a tiny fraction of total energy consumption. And, if you read the numbers, these great new solar panels lost “only” 8% of their efficiency in 100 days of testing. Compare that to the solar panels on my roof, which have lost about 10% of their efficiency in ten YEARS of daily operation, exposed to weather and all.

    Using gravity as energy storage has been kicked around for a long time, too. It’s fine for cuckoo clocks, and for pumped storage in hydroelectric generation. Hauling a big slab of something heavy to the top of a tall building means having an elevator shaft that gets used just twice a day, stealing floor space. With regenerative elevator motors, this could be done in any tall building tomorrow, as long as you have some guys willing to wheel heavy stuff into the elevators at the right time and place. (Haul a few safes up when energy is available, and let them back down when energy is needed. Try it! No need for new construction, as long as the building is sturdy enough. A back-of-envelope sketch of the numbers follows below the thread.)

    Lathechuck
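For scale, here is a minimal back-of-envelope sketch of the two numbers discussed in the last comment: how much energy one "gravity battery" trip in a building could store, and how the indoor panels' degradation rate compares with ordinary rooftop panels. The specific figures (a 1,000 kg safe, a 100 m lift, roughly 30 kWh per day for a household) are illustrative assumptions, not numbers taken from either article.

```python
# Rough, illustrative numbers for two claims discussed in the comments above.
# Figures marked "assumption" are not sourced from the articles.

G = 9.81                  # gravitational acceleration, m/s^2

# 1) "Gravity battery" in a skyscraper: potential energy of one heavy safe.
mass_kg = 1_000           # one large safe (assumption)
lift_m = 100              # roughly a 25-30 story elevator ride (assumption)
stored_kwh = mass_kg * G * lift_m / 3.6e6       # E = m*g*h; 1 kWh = 3.6 MJ
household_kwh_per_day = 30                      # typical US household (assumption)
print(f"One safe hauled {lift_m} m stores about {stored_kwh:.2f} kWh, "
      f"about {stored_kwh / household_kwh_per_day:.1%} of a household's daily use.")
# -> roughly 0.27 kWh per trip, i.e. around 1% of one household's day.

# 2) Panel degradation: "only" 8% lost in 100 days vs. 10% lost in 10 years.
indoor_loss_per_year = 1 - (1 - 0.08) ** (365 / 100)   # compounded over one year
rooftop_loss_per_year = 1 - (1 - 0.10) ** (1 / 10)     # compounded per year
print(f"Indoor panels: ~{indoor_loss_per_year:.0%} per year; "
      f"rooftop panels: ~{rooftop_loss_per_year:.1%} per year.")
# -> roughly 26% per year vs. about 1% per year, a ~25x difference.
```

If anything, these assumptions are generous to the gravity-battery idea: even a dozen safe trips a day would add up to only a few kWh, nowhere near the "limitless power" in the headline.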
