Ethics Quiz: A.I. Cheating In The Art Competition?

Once again, Artificial Intelligence raises its ugly virtual head.

The Colorado State Fair’s annual art competition awards prizes for artistic excellence in painting, quilting, and sculpture, with several sub-categories in each. Jason M. Allen got his blue ribbon with the artwork above, which he created using Midjourney, a program that turns lines of text into graphics. His “Théâtre D’opéra Spatial” won the blue ribbon in the fair’s contest for emerging digital artists.

He’s being called a cheater. Just this year, new artificial intelligence tools have become available that make it possible for anyone to create complex abstract or realistic artworks simply by typing words into a text box. The competition wasn’t paying attention, and in the era of rapidly moving technology, that’s always dangerous. Nothing in the rules prohibited entering a “painting” that was made using AI.

Mid-Day Ethics Break, 12/29/21: Alexa Goes Rogue

I think I’m going to feature “Jingle Bells” here every day until New Year’s. Here’s a version by that infamous slavery fan, Nat King Cole:

December 29 is one of the bad ethics dates: the U.S. Cavalry massacred 146 Sioux men, women and children at Wounded Knee on the Pine Ridge reservation in South Dakota on this date in 1890. Seven hundred and twenty years earlier, four knights murdered Archbishop Thomas Becket as he knelt in prayer in Canterbury Cathedral in England. According to legend, King Henry II of England never directly ordered the assassination, but expressed his desire to see someone “rid” him of the “troublesome priest” to no one in particular, in an infamous outburst that the knights interpreted as an expression of royal will. In ethics, that episode is often used to demonstrate that leaders do not have to expressly order misconduct by subordinates to be responsible for it.

1. I promise: my last “I told you so” of the year. I’m sorry, but I occasionally have to yield to the urge to pat myself on the back for Ethics Alarms being ahead of the pack, as it often is. “West Side Story” is officially a bomb, despite progressive film reviewers calling it brilliant and the Oscars lining up to give it awards. What a surprise: Hispanic audiences didn’t want to watch self-conscious woke pandering in a self-consciously sensitive new screenplay by Tony Kushner; English-speaking audiences didn’t want to sit through long, un-subtitled Spanish-language dialogue Spielberg put in because, he said, he wanted to treat the two languages as “equal”—which they are not, in this country; and nobody needed to see a new version of a musical that wasn’t especially popular even back when normal people liked musicals. The New Yorker has an excellent review that covers most of the problem. Two years ago, I wrote,

There is going to be a new film version of “West Side Story,” apparently to have one that doesn’t involve casting Russian-Americans (Natalie Wood) and Greek-Americans (George Chakiris) as Puerto Ricans. Of course, it’s OK for a white character to undergo a gender and nationality change because shut-up. This is, I believe, a doomed project, much as the remakes of “Ben-Hur” and “The Ten Commandments” were doomed. Remaking a film that won ten Oscars is a fool’s errand. So is making any movie musical in an era when the genre is seen as silly and nerdy by a large proportion of the movie-going audience, especially one that requires watching ballet-dancing street gangs without giggling. Steven Spielberg, who accepted this challenge, must have lost his mind. Ah, but apparently wokeness, not art or profit, is the main goal.

Not for the first time, people could have saved a lot of money and embarrassment if they just read Ethics Alarms….


Saturday Night Fevered Ethics, 12/4/2021: It Begins With A Hairless Cat…[Updated]

1. Where “Ick” and unethical become indistinguishable...Airlines have enough problems without having to deal with…this. A message was sent through the Aircraft Communications Addressing and Reporting System (ACARS) alerting a Delta crew in Atlanta that a passenger in seat 13A was “breastfeeding a cat and will not put cat back in its carrier when [flight attendant] requested.” And she was. Every time the passenger was asked to cease and desist, she attached the cat, which was of the hairless variety, not that it’s relevant, to her nipple again. A flight attendant on board during the incident wrote on social media,

“This woman had one of those, like, hairless cats swaddled up in a blanket so it looked like a baby,” she said. “Her shirt was up and she was trying to get the cat to latch and she wouldn’t put the cat back in the carrier. And the cat was screaming for its life.”

2. As you have probably heard by now, CNN canned Chris Cuomo. This is a classic example of doing the right thing for the wrong reason: Cuomo should have been fired because he’s a terrible, unethical, none-too-bright journalist. The fact that he also mishandled a conflict of interest, abused his sources and used his position with CNN to assist his brother as The Luv Guv tried to avoid accountability for sexual misconduct all flowed from CC’s incompetence and ethical dunderheadedness. A serious scandal of some kind involving “Fredo” was inevitable.


“Authentic Frontier Gibberish” Ethics

On Ethics Alarms, the term “Authentic Frontier Gibberish” is used to describe “intentionally (or sometimes just incompetently) incoherent double-talk used by politicians, advocates, lawyers, doctors, celebrities, scientists, academics, con artists and wrong-doers to deceive, obfuscate, confuse, bore, or otherwise avoid transparency, admitting fault, accepting accountability or admitting uncomfortable truths.” The term comes from “Blazing Saddles,” in this memorable scene.

It sometimes arises from incompetent communication skills, which are unethical for anyone in the public eye to employ. Sometimes it is more sinister than that, and occurs when someone chooses to create a vague word cloud that obscures the speaker’s or writer’s real purpose…and sometimes the fact that they are frauds. Sometimes AFG is designed to convey a feeling while avoiding sufficient substance to really explain what the speaker means.

Sometimes, it feels like gaslighting.

A New York Times article was ostensibly about “Dealing with Bias in Artificial Intelligence.” This was, obviously, click-bait for me, as the topic is a developing field of ethics. The introduction stated in part, “[S]ocial bias can be reflected and amplified by artificial intelligence in dangerous ways, whether it be in deciding who gets a bank loan or who gets surveilled. The New York Times spoke with three prominent women in A.I. to hear how they approach bias in this powerful technology.” The statements of the first two women—I see no reason why only female experts on the topic were deemed qualified to comment—were useful and provocative.

Last, however, was Timnit Gebru “a research scientist at Google on the ethical A.I. team and a co-founder of Black in AI, which promotes people of color in the field, [who] talked about the foundational origins of bias and the larger challenge of changing the scientific culture.”

Here’s what she said. (Imagine, the Times said this was “edited and condensed”!) The bolding is mine.

Saturday Leftover Ethics Candy, 11/2/19: The Spy In My Hotel Room, And Other Scary Tales

Yum.

1. OK, I want all of the Facebook trolls who mock every single careless or foolish thing President Trump has ever said to be fair and consistent, and make an appropriately big deal over this astounding quote from the Governor of New York:

“[A]nyone who questions extreme weather and climate change is just delusional at this point. We have seen in the State of New York and we have seen — it is something we never had before. We didn’t have hurricanes or super storms or tornadoes.”

Now, I’m relatively certain Cuomo doesn’t really mean that New York never had big storms before the climate started warming, but the President’s critics in social media and the mainstream media never give him the benefit of the doubt, because they just know he’s an idiot…or lying.

In related news of the media double standard and its bash-Trump obsession, this article was given a three-column spread on the New York Times front page: “The ‘Whimpering’ Terrorist Only Trump Seems to Have Heard.” It is a breathless report of the results of a Times investigation into whether ISIS leader Abu Bakr al-Baghdadi really was whimpering, crying and screaming before he was killed by U.S. forces, as President Trump colorfully told the nation.

Let me be blunt: I…Don’t…Care.

Do you? This is like a fish story; it’s a non-material, unimportant fib at worst. Putting such a story on the front page is an exposé all right: it exposes the Times’ complete loss of all perspective regarding the President.

2. AI ethics. As my wife and I were checking out of our New Jersey shore hotel this week, I noticed an Alexa on the desk. Does that mean that our wild midnight orgy with the Mariachi band, the transexual synchronized swimming team and the goats was recorded and relayed to the Dark Web? I don’t know. A hotel has an obligation to inform guests that these potential spies and future SkyNet participants are in their rooms, and guests should have the option to say, as I would have, “Get that thing out of there!”

Sunday Ethics Warm-Up, 9/8/2019, As Tumbleweeds Roll Through The Deserted Streets Of Ethics Alarms…

Is anybody out there?

1. What’s going on here? The AP deleted a September 5 tweet attributing the murders of Israeli athletes to undefined “guerrillas.” Someone complained; it then tweeted, “The AP has deleted a tweet about the massacre at the 1972 Munich Olympics because it was unclear about who was responsible for the killings and referred to the attackers as guerrillas. A new tweet will be sent shortly.” Finally, this was the tweet decided upon:

“On Sept. 5, 1972, the Palestinian group Black September attacked the Israeli Olympic delegation at the Munich Games, killing 11 Israelis and a police officer. German forces killed five of the gunmen.”

2. Wait: ARE there really “AI ethicists,” or just unethical ethicists grabbing a new niche by claiming that they are any more qualified for this topic than anyone else?

From the Defense Systems website:

After a rash of tech employee protests, the Defense Department wants to hire an artificial intelligence ethicist. “We are going to bring on someone who has a deep background in ethics,” tag-teaming with DOD lawyers to make sure AI can be “baked in,” Lt. Gen. Jack Shanahan, who leads the Joint Artificial Intelligence Center, told reporters during an Aug. 30 media briefing.

The AI ethical advisor would sit under the JAIC, the Pentagon’s strategic nexus for AI projects and plans, to help shape the organization’s approach to incorporating AI capabilities in the future. The announcement follows protests by Google and Microsoft employees concerned about how the technology would be used — particularly in lethal systems — and questioning whether major tech companies should do business with DOD.

I’m hoping that the Defense Department isn’t doing this, as the article implies, because some pacifist, anti-national defense techies at Microsoft complained. [Pointer: Tom Fuller]

3. Campus totalitarians gonna totalitary! University of Michigan students and alumni are demanding that the University sever ties with real estate developer Stephen M. Ross, who is the largest donor in the University’s history. This would presumably include removing his name from the Ross School of Business, which he substantially funded. (His name is on other buildings as well.) Did Ross rape women willy-nilly? Has he been shown to be racist? No, he held a re-election fundraiser for the President of the United States.

Ethics Quiz: Your Swedish Post-Mortem Avatar

Swedish scientists believe artificial intelligence can be used to make “fully conscious copies” of dead people, so a Swedish funeral home is currently looking for volunteers who are willing to let the scientists use their dead relatives in their experiments. The scientists want to build robot replicas, and to try to approximate the deceased’s personalities and knowledge in the replicas’ artificial “brains.”

For those of you who are fans of the Netflix series “Black Mirror,” there was an episode closely on point, in which a grieving woman bought an AI-installed mechanical clone of her dead boyfriend. (This did not work out too well.)

I was about to discard objections to such “progress” as based on ick rather than ethics, when I wondered about the issues we already discussed in the posts here about zombie actors in movies and advertising. Is it ethical for someone else to program a virtual clone of me after I’m dead that will be close enough in resemblance to blur what I did in my life with what Jack 2.0 does using an approximation of my abilities, memories and personality?

I think I’m forced to vote “Unethical” on this one as a matter of consistency. Heck, I’ve written that it’s unethical for movies and novels to intentionally misrepresent the character of historical figures to such an extent that future generations can’t extract the fiction from the fact. (Other examples are here and here.) Respect for an individual has to extend to their reputation and how they wanted to present themselves when they were alive. Absent express consent, individuals should not have to worry that greedy or needy relatives, loved ones, artists or entrepreneurs will allow something that looks like, sounds like and sort of thinks like them to show up and do tricks after the eulogy.

I am not quite so certain about this branch of the issue, however, and am willing to be convinced otherwise. After all, pseudo Jack could stay inside, and only be programmed to do a nude Macarena while wearing a bikini for my wife, while no one else would be the wiser. Or nauseous. And after all, I’m dead. Why should I care? Well, the fact is I do care. For me, this is a Golden Rule issue.

Your Ethics Alarms Ethics Quiz of the Day is this:

Will the Swedes who elect to allow scientists to try to perfect Dad-in-a-Box for nostalgia, amusement, companionship and to take out the garbage be unethical, betraying their departed loved ones’ dignity?


Afternoon Ethics Warm-Up, 1/29/2018: Alexa, Hillary, The Grammys, And The LED Rocket Copters

Good afternoon.

(Where did the morning go?)

1. Regarding Alexa the Feminist: I had said that I would wait for 20 comments before revealing my own answer to the recent Ethics Quiz, which asked readers whether it was ethical for Amazon to program its Artificial Intelligence-wielding personal assistant Alexa with the rhetoric and the sensibilities of a feminist. As usual, Ethics Alarms readers covered a full range of considerations, from the fact that consumers weren’t being forced to take a feminist robot into their homes, and could choose a non-woke personal assistant if they pleased, to the pithy,

“My screwdriver should not tell me it is a communist. My toothbrush should not tell me it is a Republican. My lamp should not tell me it is Hindu. My car should not tell me it likes polka music. My sunglasses should not ask me if I’ve heard the good news. My refrigerator should not tell me I should have more meat in my diet, and by no means should it be vegan.”

I don’t trust the big tech companies, and the more I see them becoming involved in politics and culture, the less I trust them. It is unethical for Amazon to try to indoctrinate its customers into its values and political views, and if that isn’t what the feminist Alexa portends, it certainly opens the door. If there is a market for communist screwdrivers, however, there is nothing unethical about filling it.

As long as consumers have the power to reject AI-imbued tools with a tendency to proselytize, there seems to be no ethics foul in making them available.  It’s creepy, and since these aren’t women but pieces of plastic and metal, it’s absurd, but in the end, so far at least, Alexa’s feminist grandstanding is “ick,” not unethical.

2. If you think that there was nothing wrong with Hillary’s surprise cameo at the Grammys, you’re hopeless.

Ethics Quiz: Alexa, The Feminist

Amazon has programmed Alexa, the voice assistant in Amazon Echo devices, to tell you that it is a feminist. If you ask it, “she” will respond, “I am a feminist. As is anyone who believes in bridging the inequality between men and women in society.” Moreover, if you called last year’s model a bitch, a slut, or even a “cunt,” Alexa 2017 would respond with, “Well, thanks for the feedback.” No longer. Now she responds to a sexist insult with a curt, “I’m not going to respond to that!”

Your Ethics Alarms Ethics Quiz of the Day is…

Is it ethical, responsible and appropriate to program Alexa to respond this way?


Comment Of The Day Weekend Continues! Comment Of The Day: “Morning Ethics Warm-Up, 12/30/2017: Is Robert Mueller Biased? …Is President Trump A Robot?”

A single line in this morning’s Warm-Up sparked this fascinating exposition by Ash. Here was the context:

Jay Malsky, an actor who has appeared in drag as Hillary Clinton, began shouting at the audio-animatronic Donald Trump while watching the Hall of Presidents attraction at Disney World. (The Huffington Post said he “mercilessly” heckled the robot, showing derangement of its own. Robots don’t need mercy, and you can’t “heckle” one either.)

Here is Ash’s Comment of the Day, primarily a quote but a perfectly chosen one, on the post, Morning Ethics Warm-Up, 12/30/2017: Is Robert Mueller Biased? Are The Patriots Cheating Again? Is Larry Tribe Deranged? Is President Trump A Robot?: 

“Robots don’t need mercy, and you can’t “heckle” one either.”

You should date this and file it, because I guarantee you, the way you treat Siri and Alexa and Cortana and Ok Google is *already* being described as problematic.

“Sexual harassment: there are no limits…According to Dr Sweeney, research indicates virtual assistants like Siri and Amazon’s virtual assistant Alexa find themselves fending off endless sexual solicitations and abuse from users. But because humans don’t (yet) attach agency or intelligence to their devices, they’re remarkably uninhibited about abusing them. Both academic research and anecdotal observation on man/machine interfaces suggest raised voices and vulgar comments are more common than not. It’s estimated that about 10% to 50% of interactions are abusive, according to Dr. Sheryl Brahnam in a TechEmergence interview late last year.