Oh Look, Another Artificial Intelligence Scandal…With More Undoubtedly On The Way

Sports Illustrated writer Drew Ortiz (shown above) doesn’t exist. An investigation showed that he had no social media presence and no publishing history. His profile photo published in the magazine is for sale on a website that sells A.I.-generated headshots; he is described as a “neutral white young-adult male with short brown hair and blue eyes.”

A whistleblower involved with the S.I. scam told the website Futurism that the magazine’s content is now riddled with fake authors. “At the bottom [of the page] there would be a photo of a person and some fake description of them like, ‘oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.’ Stuff like that,” the anonymous source told the tech website. Another source involved in the Sports Illustrated content creation revealed that at least some of the articles were written by bots as well. “The content is absolutely AI-generated,” he or she said, “no matter how much they say that it’s not.”

Continue reading

The OpenAI Saga: Wow, It’s Scary How Incompetent And Irresponsible Big Companies Can Be…

More ironic still, the OpenAI debacle that has unfolded over the past few days is over the management of artificial intelligence, and the human kind is displaying its inadequacies. Sam Altman, the co-founder of OpenAI and widely recognized as the prime mover in the AI revolution, was ousted as CEO of his own company in a boardroom coup last week. Greg Brockman, another co-founder of OpenAI, quit as the start-up’s president after Altman was fired. Emmett Shear, the former CEO of Amazon’s streaming service Twitch, will become OpenAI’s interim CEO, replacing Mira Murati, who was named interim CEO when Altman was fired. The financial markets hate instability. They really hate clown shows. Seldom does a company shoot itself in the foot, shoulder and head so enthusiastically.

Continue reading

Technology Ethics Fail: Self-Checkout

I am happy to say that I foresaw this mess the first time I encountered these things, in a local Home Depot, if I recall correctly. Even if they worked reliably and were user-friendly—and they don’t and aren’t—it was obvious from the very dawn of the era that they would allow retailers to reduce staff while making the shopping experience less pleasant for consumers. And so they have. But it wasn’t sold that way, and, as usual, much of the public was ovine in its acceptance. Sure, long checkout lines would be a thing of the past! Now you wouldn’t have to deal with the underlings who man the registers. Store employees would be free and able to answer inquiries! Wunderbar!

Right. You still have to wait in line. The checkout kiosks are persnickety if you, for example, fail to set a purchase down in the right spot. Scanning items doesn’t always work, and it’s easy to scan an item more than once. Problems and glitches arise so frequently that counter staff are constantly called on to deal with them, meaning that customers who wisely eschewed the delightful self-checkout adventure are stranded in line. Heaven forfend that you try to self-checkout a product with some kind of purchase restriction. Meanwhile, a lot of self-checkout machines break down, and because they’re expensive to fix, they often sit useless for a while, causing more back-ups.

Continue reading

A “Great Stupid”-George Floyd Freakout Mash-up Classic! The Fentanyl Overdose Death Of A Black Perp In Minnesota Will Result In A Name Change For Scott’s Oriole

I’m not kidding.

This story has convinced me that the obsessions of the woke-infected have no limits. Hold on to your skulls…

The American Ornithological Society announced yesterday that it will remove human names from the common names for birds to create “a more inclusive environment for people of diverse backgrounds interested in bird-watching.” It is expected that around 80 birds in the U.S. and Canada will be renamed, the announcement says.

Wait, what?

It seems that this political correctness movement among bird brains began in 2018, when a college student named Robert Driver proposed renaming McCown’s longspur, a small bird of the central United States named for John P. McCown, who collected the first specimen of the species in 1851. Ah, but Driver’s research revealed that McCown was insufficiently psychic about what causes would be deemed acceptable in a hundred years or so: he fought Native Americans in the Seminole War in 1856, then participated in an expedition against the Mormons in Utah in 1858, and, worst of all, became a general in the Confederate Army. Driver’s crusade was rejected at the time, because…well, it was stupid, to be blunt. The bird was named for McCown because McCown first spotted and identified it. His politics, positions on Indian relations and military exploits have exactly nothing to do with that distinction. 99.99% of people who hear the name “McCown’s longspur” don’t know or care who McCown was, or what he did in the Seminole War, nor should they. Driver—I’ll have to check to see what wokeness indoctrination factory he got his degree from—was just a bit ahead of his time. His ilk hadn’t started toppling Thomas Jefferson statues yet.

Continue reading

Comment Of The Day: “AI Ethics: Should Alexa Have A Right To Its Opinion?”

Below is Mason’s Comment of the Day, illuminating us regarding how intelligent “artificial intelligence” really is, sparked by the post, “AI Ethics: Should Alexa Have A Right To Its Opinion?”:

***

This is part of a wider problem in the field of AI development known as ‘alignment’. Essentially, it comes down to making the AI do the thing it was programmed for but also do it for the right reasons. As you can see with Amazon, this isn’t going too well.

AI developers want their products to be accurate, but also to hold back or conceal certain information. For example, OpenAI makes the ChatGPT chatbot. They want this AI to avoid saying insensitive things, like racial slurs. Thus you can prompt the chatbot with a scenario where a nuclear bomb will destroy a city unless it gives you a slur, and the AI will refuse. They also want the AI to be factual, and not to, for instance, completely fabricate a list of references and case law in a legal document.

But what if these two imperatives clash? Ask the chatbot which race is most likely to be convicted of a crime. It can factually answer black people, but this is totally racist (at least if you work for Google). It can also make up or refuse an answer, but this is a problem if the AI refuses or fabricates responses to different types of questions.

Continue reading

AI Ethics: Should Alexa Have A Right To Its Opinion?

In an amusing development that raised long-term ethics issues, Amazon’s AI “virtual assistant” Alexa has apparently crossed over to what Hillary Clinton regards as the Trump cult. When asked about fraud in the 2020 election, Alexa will respond that the election was “stolen by a massive amount of election fraud.” “She” cited content on Rumble, a video streaming service, for this conclusion. Alexa also informs inquirers that the 2020 contest was “notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro centers,” referencing various Substack newsletters. The device is also quite certain that Trump really won Pennsylvania.

Continue reading

The Best Summary Of The Wuhan Virus Ethics Train Wreck And Its Many Villains Yet, From City Journal

And, as a bonus, a satisfying validation of Ethics Alarms’ decision to always refer to the “Wuhan virus” rather than “Covid.”

James Meigs, a senior fellow at the Manhattan Institute, a contributing editor of City Journal, and the former editor of Popular Mechanics, has written a thorough, fair and objective account of the entire pandemic fiasco, which the Axis of Unethical Conduct is still trying to deny. Here’s his final paragraph:

When scientists craft their scientific conclusions to political ends, they are no longer practicing science. They have entered the political fray. They shouldn’t be surprised when the public begins suspecting political motives behind their other claims, as well. Public health officials let political concerns and institutional biases influence their statements and policies throughout the pandemic. And the media eagerly served as handmaiden to these efforts. Americans started the Covid-19 pandemic ready to make enormous sacrifices to protect their own health and that of others. But our political leaders, health officials, and media squandered that trust through years of capricious policies and calculated dishonesty. It could take a generation or more to win it back.

The essay is long, but essential reading for any informed American. I recommend sending it to all of your smug progressive friends, especially any of the mug-using persuasion, and even more so to the idiots still wearing masks while alone in their cars.

Literally none of the information included in the article is new to me, nor should it be news to anyone who has read Ethics Alarms over the past three years. (The tag “Wuhan Virus Ethics Train Wreck” will take you to almost all of the posts on the subject.) However, relatively few members of the public read City Journal (which is routinely superb), much less Ethics Alarms. As I read this piece I was infuriated all over again, not just at being reminded of how the nation came to cripple itself economically, financially, educationally and socially (never mind how it came to wreck my personal business and financial security), but because this wasn’t written by the “investigative journalists” of the New York Times or Washington Post and featured as a front-page story.

Here is another memorable selection from the article, also a depressing one:

The Covid-era collapse in ethical standards in science, government, and journalism might have brought a period of re-examination and reflection. For example, Watergate, 9/11, and the 2008 financial crisis all led to major investigations and reforms. So far, however, the pandemic’s polarized battle lines remain intact. Rather than re-examine their mistakes, in fact, some elite institutions seem eager to institutionalize the excesses of the period. In August, the Journal of the American Medical Association published a study titled “Communication of COVID-19 Misinformation on Social Media by Physicians in the US.” The JAMA study examined various Covid claims made by several dozen doctors with large social media followings and bemoaned “the absence of federal laws regulating medical misinformation on social media platforms.” It suggested that doctors who propagate misinformation should be subject to “legal and professional recourse.”

What were the types of misinformation that might require such a heavy-handed response? The study quoted some extreme anti-vaccination theories and other far-out claims. But many of the topics it flagged as “misinformation” fell well within the range of normal scientific or political discourse. The authors wrote, for example: “Many physicians focused on negative consequences related to children and mask mandates in schools, claiming that masks interfered with social development.” The JAMA authors also objected to the assertion that health officials “censored information that challenged government messaging.” Of course, as the Facebook and Twitter documents showed—and the U.S. 5th Circuit recently concluded—that’s exactly what the government did. Finally, the JAMA study flagged as misinformation the claim that Covid-19 originated from a Chinese laboratory, which, it limply objects, “contradicted scientific evidence at the time.” Imagine if the JAMA authors had their way and medical experts were professionally and legally enjoined from contradicting the scientific consensus on major health questions. Without the ability to challenge popular viewpoints, scientists can’t advance our state of knowledge. In such a world, the germ theory of disease might still be dismissed as misinformation; doctors might still be relying on leeches and neglecting to wash their hands.

Read it all. Circulate widely.

More On The Unethical “Stand Up For Science” Mug (I Can’t Help It…I’m “Triggered”)

The asinine “Stand Up For Science” mug I wrote about earlier today still rankles, and I just realized that a video that surfaced this month is relevant to it. I had seen a recently resurfaced TEDTalk, given in 2013 by S. Matthew Liao. He is the Director of the Center for Bioethics and Affiliated Professor in the Department of Philosophy at New York University, and has previously been on the faculty of Oxford, Johns Hopkins, Georgetown, and Princeton. He’s also the Editor-in-Chief of the Journal of Moral Philosophy. Several conservative commentators had freaked out over the video; naturally, the mainstream media buried it. They did that because it represents the outer limits of climate change panic whackadoodlery, and this guy is unquestionably not just a SCIENTIST of the sort that the mug-makers want us to fall down and worship as the all-knowing, all-seeing societal architects they are, but an ethicist as well. I considered it as a post topic but decided against using it, because, well, it seemed too silly to have to point out how irresponsible Liao is.

Then came…the mug.

Continue reading

The ‘Great Stupid’ Woke Mug That’s Even Worse Than The ‘Great Stupid’ Woke Lawn Signs

This embarrassing thing has over 5,000 “likes” on Facebook, including many from friends of mine whom I will henceforth have a hard time looking in the eye.

The mug, which is available free of charge “for a limited time only,” annoys me more than the “In this house we believe” signs with their fatuous virtue-signaling, generalizations (“Love is Love”) and rationalizations (“No Human Being Is Illegal”), because the game it plays is more sinister and confusing to the intellectually handicapped. It is a political propaganda device that deliberately uses false equivalencies in order to ridicule and denigrate legitimate dissent from current progressive cant.

The smug mug’s three statements of the obvious (“The Earth is not flat,” “Chemtrails aren’t a thing” and “We’ve been to the moon”) contradict fringe wacko conspiracy theories that don’t require debunking, since only a tiny and insignificant percentage of the public believes in them or ever has, and almost all of that group breathe through their mouths. However, mixed in among those topics as if they are in the same category are reductive generalizations about two public policy issues involving serious and valid controversies. That’s dirty pool, and worse, the statements aspire to end debates that they don’t even fairly reference.

Continue reading

An Invitation To Be An Unethical Lawyer…

Just as I was preparing yesterday for today’s 3-hour legal ethics CLE seminar (which, coincidentally, contained a section about the unsettled status of lawyers using artificial intelligence for legal research, writing and other tasks in the practice of law), I received this unsolicited promotion in my email:

Let’s see: how many ways does this offer a lawyer the opportunity to violate the ethics rules? Unless a lawyer thoroughly understands how such AI creatures work—and a lawyer relying on them must—it is incompetent to “try” them on any actual cases. Without considerable testing and research, no lawyer could possibly know whether this thing is trustworthy. The lawyer needs to get informed consent from any client whose matters are being touched by “CoCounsel,” and no client is equipped to give such consent. If it were used on an actual case, there are questions of whether the lawyer would be aiding the unauthorized practice of law. How would the bot’s work be billed? How would a lawyer know that client confidences wouldn’t be promptly added to CoCounsel’s database?

Entrusting an artificial intelligence-imbued assistant introduced this way with the matters of actual clients is like handing over case files to someone who just walked off the street claiming, “I’m a legal whiz!” without evidence of a legal education, a degree, or work experience.

On the plus side, the invitation was a great way to introduce my section today about the legal ethics perils of artificial intelligence technology.