The A.I. Ethics Problem in News Reporting

Guest post by Matthew B.

JM Introduction: This excellent post arrived on yesterday’s open forum, and thus was immediately eligible for guest column status. It is especially timely, both because of this story from the legal ethics jungle and this more alarming one:

The top United States Army commander in South Korea revealed to reporters this week that he has been using a chatbot to help with decisions that affect thousands of U.S. soldiers. Major General William “Hank” Taylor told the media in Washington, D.C., that he is using AI to sharpen decision-making, but not on the battlefield. The major general — the fourth-highest officer rank in the U.S. Army — is using the chatbot to assist him in daily work and command of soldiers.

Speaking to reporters at a media roundtable at the annual Association of the United States Army conference, Taylor reportedly said “Chat and I” have become “really close lately.”

Great. What could go wrong? Now here’s Matthew…

***

One of the problems with AI is how often it is confidently wrong. This manifests itself all over the place, and one of the most troubling arenas is the news industry. The news industry is under tremendous financial pressure, and the appeal of moving toward AI-generated content opens it up to spreading completely bogus stories.

There are several great recent examples.
