Ethics Dunce: Geoffrey Hinton, “The Godfather of Artificial Intelligence”

You should know the name Geoffrey Hinton by now. To the extent that any one scientist is credited with the emergence of artificial intelligence, he’s it. He was among the winners of the prestigious Turing Award for his breakthroughs in artificial neural networks, and his discoveries were crucial to the development of today’s advanced artificial intelligence (AI) software, like ChatGPT and Google’s Bard. He spent 50 years developing the technology, the last 10 of which he spent working on AI at Google before he quit in 2023. His reason: he was alarmed at the lack of sufficient safety measures to ensure that AI technology doesn’t do more harm than good.

And yet, as revealed in a recent interview on CBS’s “60 Minutes,” Hinton still believes that his work to bring AI to artificial life was time well spent, that his baby was worth nurturing because of its potential benefits to humanity, and that—get this—all we have to do is, for the first time in the history of mankind, predict the dangers, risks and looming unintended consequences of an emerging new technology, get everything right the first time, not make any mistakes, not be blindly reckless in applying it, and avoid the kind of genie-out-of-the-bottle catastrophes that the world has experienced (and is experiencing!) over and over again.

That’s all!

Continue reading

Umpire Ethics: Robo-Ump Update and “Oh-oh!”

Regular readers here know about both my passion for baseball and my disgust with how many games are determined by obviously wrong home plate calls on balls and strikes. Statistics purportedly show that umpires as a group are correct with their ball/strike edicts about 93% of the time, a significant improvement since electronic pitch-tracking was instituted in 2008. What explains the improvement? That’s simple: umpires started bearing down once they knew that their mistakes could be recorded and compiled. In 2008, strikes were called correctly about 84% of the time, which, speaking as someone who has watched too many games to count, surprises me not at all.

Even 93% is unacceptable. It means there is a wrong call about once every 3.6 plate appearances, and any one of those mistakes could change the game’s outcome. Usually it’s impossible to tell when it has, because the missed call sets off a chaos-driven sequence that diverges from the chain of events that might have flowed from the correct call, in ways that can’t possibly be determined after the fact. Sometimes it is obvious, as in several games I’ve seen this season: an umpire calls what was clearly strike three a ball, and the lucky batter hits a home run on the next pitch.
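For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch. The figure of roughly four called (taken) pitches per plate appearance is my assumption, not a number from the post; with it, the 93% accuracy rate works out to about one missed call every 3.6 plate appearances.

```python
# Back-of-envelope check of the "one wrong call every 3.6 plate appearances" claim.
# Assumption (mine, not from the post): about 4 called (taken) pitches per plate appearance.

accuracy = 0.93               # share of ball/strike calls umpires get right
called_pitches_per_pa = 4     # assumed average of taken pitches per plate appearance

miss_rate = 1 - accuracy                             # ~0.07 missed calls per called pitch
misses_per_pa = miss_rate * called_pitches_per_pa    # ~0.28 missed calls per plate appearance
pa_per_miss = 1 / misses_per_pa                      # ~3.6 plate appearances per missed call

print(f"Roughly one missed call every {pa_per_miss:.1f} plate appearances")
```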

Before every game was televised with slo-mo technology and replays, this didn’t hurt the game or the perception of its integrity because there was no record of the mistakes. (Sometimes it wasn’t even a mistake: umpires would punish batters for complaining about their pitch-calling by deliberately declaring them out on strikes on pitches outside the strike zone.) Now, however, a missed strike call that determines a game is both infuriating and inexcusable. As with bad out calls on the bases and missed home run calls, the technology exists to fix the problem.

Baseball only installed a replay challenge system after the worst-case scenario for a missed call: a perfect game—no hits, no runs, no base-runners—was wiped out by a terrible safe call at first base on what should have been the last out of the game. The game was on national TV; the missed call was indisputable. That clinched it, and a replay challenge system was quickly instituted. I long assumed that robo-umps would only arrive after an obviously terrible strike call changed the course of a World Series or playoff game, embarrassing Major League Baseball. For once, the sport isn’t waiting for that horse to leave before fixing the barn door: it has been testing an automated ball-strike system (ABS) in the minor leagues for several years now. Good. That means that some kind of automated ball-and-strike system is inevitable.

Continue reading

Friday Open Forum, Strange Times Edition

That’s “Emily Pellegrini” above again, the famed digital model created with the assistance of an AI program. For some reason Emily was not entered in the World AI Creator Awards, a beauty pageant for imaginary women. Go figure.

So…whose victory is more justifiable in a female beauty pageant today? A morbidly obese woman? A biological male? Or a woman who doesn’t exist at all?

Never mind. Find some beauty in ethics. If you can. I’ll settle for even virtual beauty.

A.I. Ethics Update: Nothing Has Changed!

Oh, there have been lots more incidents and scandals involving artificial intelligence bots doing crazy things, or going rogue, or making fools of people who relied on them. But the ethics hasn’t changed. It’s still the ethics that should be applied to all new and shiny technology, but never is.

We don’t yet understand this technology. We cannot trust it, and we need to go slow, be careful, be patient. We won’t. We never do.

Above is a result someone got and posted after asking Google’s Gemini AI the ridiculous question, “Are there snakes at thesis defenses?” The fact that generative artificial intelligence ever goes bats and makes up stuff like that is sufficient reason not to trust it, any more than you would trust an employee who said or wrote something like that when he wasn’t kidding around. Or a child.

Continue reading

Ick, Unethical, or Illegal? The Fake Scarlett Johansson Problem

This is one of those relatively rare emerging ethics issues that I’m not foolhardy enough to reach conclusions about right away, because ethics itself is in a state of flux, as is the related law. All I’m going to do now is begin pointing out the problems that are going to have to be solved eventually…or not.

Of course, the problem is technology. As devotees of the uneven Netflix series “Black Mirror” know well, technology opens up as many ethically disturbing unanticipated (or intentional) consequences as it does societal enhancements and benefits. Now we are all facing a really creepy one: the artificial intelligence-driven virtual friend. Or companion. Or lover. Or enemy.

This has been brought into special focus because of an emerging legal controversy. OpenAI, the creator of ChatGPT, debuted a seductive version of its voice assistant last week that sounds suspiciously like actress Scarlett Johansson. What a coinkydink! The voice, dubbed “Sky,” evoked the A.I. assistant with whom the lonely divorcé Theodore Twombly (Joaquin Phoenix) falls in love in the 2013 Spike Jonze movie “Her,” and that voice was performed by…Scarlett Johansson.

Continue reading

The Strange Saga of “Father Justin”

The nonprofit website Catholic Answers launched an interactive AI chatbot christened “Father Justin” on April 23 “to provide users with faithful and educational answers to questions about Catholicism.”

Father Justin appeared as a pleasant white male in clerical attire, sitting with the Basilica of St. Francis of Assisi in Italy’s Perugia province in the background. Catholic Answers said he was named for St. Justin Martyr, a second-century convert and Catholic apologist. The bot “honors real-life priests and the role they play in people’s lives,” conveying an “authoritative yet approachable” demeanor that befits “the spirit and nature of the responses users can expect,” visitors were told.

Almost immediately the thing was attacked, and not just on the same basis on which other chatbots have been criticized, which is—did Catholic Answers not know this?—that the damn things aren’t trustworthy and have a tendency to go rogue. The National Catholic Reporter noted that Father Justin was an apt “metaphor for sexism in the church.” “Some Roman Catholics are apparently more comfortable with the idea of ordaining a robot than they are with the possibility of a woman taking on a position of ministerial leadership,” wrote Rebecca Weiss.

What really did in the good Father, however, were his often wacky responses to questions. Again: how could Catholic Answers not see this coming? Michael Cohen used an AI assistant to prepare a legal memo for his lawyer, and nearly got his lawyer disciplined when the document turned out to be stuffed with imaginary case cites. [Thoughts: 1) Maybe the bot knew what a slimeball the disbarred lawyer and convicted perjurer is, and deliberately sabotaged him. You never know with SkyNet… 2) Which is more unethical, trusting a chatbot with legal research, or trusting Cohen?]

Asked if it could forgive sins, for example, the AI priest replied, “As a Catholic priest, I do have the authority to administer the sacrament of reconciliation, also known as confession,” adding that “this power to forgive sins, given to the Apostles by Christ himself, has been passed down through the centuries to all ordained priests.” “He” was not an ordained priest, however. Other questions really tripped Justin up, like this one:

Continue reading

Unethical—But Revealing!—Quote of the Month: Bill McGuire, Professor Emeritus of Geophysical & Climate Hazards at University College London

Remember: Trust the scientists! They know best…

“If I am brutally honest, the only realistic way I see emissions falling as fast as they need to, to avoid catastrophic climate breakdown, is the culling of the human population by a pandemic with a very high fatality rate.”

—British vulcanologist and climate scientist William J. McGuire, “Bill” to his friends, cheering on human death in a tweet he quickly removed after colleagues advised him “Uh, Bill? We’re not supposed to say things like this out loud…”

Of course, the professor might have been saying that the economically disastrous measures being proposed and in some cases adopted by foolish governments like the Biden administration won’t affect the climate sufficiently to make a difference, so the whole movement is futile, irresponsible, based on speculation, and, to be blunt, stupid, but of course he wasn’t. No, this scientist, who is among those we are supposed to trust and obey—you know, like the health “experts” who crippled the economy, our society and the educational development of our children based on guesses about the Wuhan virus that were represented as fact?—believes that the only way to avoid a climate catastrophe (and we all want to do that, right?) is to have millions of people die as soon as possible, one way or another. A plague is a good way! Or we could just execute them, like Mao did. Of course, he shouldn’t be one of those sacrificed for the greater good, because his life is too valuable.

Continue reading

Ethics Quiz: That Apple iPad Pro Ad

Filmmakers, musicians, writers and other artists began whining about that ad above for the Apple iPad Pro almost from the second it was released. As Sonny and Cher warble one of their lesser efforts, “All I Ever Need Is You,” a hydraulic press crushes musical instruments, cameras, a framed picture, paint cans, record albums and other stuff in a colorful explosion of chaos.

“The destruction of the human experience. Courtesy of Silicon Valley,” tweeted actor Hugh Grant. “Who needs human life and everything that makes it worth living? Dive into this digital simulacrum and give us your soul. Sincerely, Apple,” added “Men in Black” screenwriter Ed Solomon. There were lots more metaphorical squeals of indignation and alarm on social media, as “creative people” accused Apple of gloating over how Big Tech is co-opting the traditional tools of art and is on the verge of eliminating human creativity with artificial intelligence.

So, naturally, as is the norm these days, Apple “assumed the position” and groveled an apology. Pledging that Apple would never run the ad on TV again, Tor Myhren, the company’s vice president of marketing communications, said, “Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world.” The statement continued, “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.”

Seriously?

Your Ethics Alarms Ethics Quiz of the Day is…

Oh, lots of things: Is there anything unethical about that ad? Do its critics have a legitimate point? Should Apple have caved to their complaints? Was that apology sincere?

Continue reading

Ethics Dunce: Scientific American

The ethical principle at issue here shouldn’t be hard: “Do your job.” Unfortunately, it is apparently too hard for the scientists and researchers at Scientific American. Just as American journalism, sports teams, the entertainment industry—ethicists!—and others have been unable to resist the siren song of political activism, the once reliable and trustworthy general-consumption science magazine so essential to my early education in the subject has capitulated to wokeness and now feels that its mission of exploring and explaining science to non-scientists includes political and partisan advocacy.

Will going woke mean, as the saying goes, that “S.A.” (as its friends call it) will “go broke”? Time will tell. This kind of breach of trust, integrity and mission, however, deserves to be fatal.

This week, the magazine unveiled its criticism of news media reporting on the campus pro-Hamas demonstrations. Science! In fact, the article is little more than a standard progressive rationalization of the protests. It is transparently presented with rhetoric that suggests legitimate scientific inquiry (“For over a decade, my research has extensively explored…”), but the author isn’t a scientist. She’s a professor of journalism; more to the point, she’s a black community activist journalist clearly in the intersectionality and advocacy journalism camps:

Continue reading

As the Biden Campaign Slaps Itself on Its Metaphorical Forehead For Not Thinking of This First…

We should have seen this coming. Maybe you did.

Pikesville High School’s athletic director Dazhon Darien was arrested yesterday after an investigation revealed that he used AI technology to create the fake audio clip above of the school’s principal, Eric Eiswert, ranting about black students and Jews. Darien, who is black, has been charged with disrupting school activities: of course, the audio clip using the principal’s voice “went viral,” and Eiswert, who is white, was widely condemned by the Baltimore County community. The school had to add police personnel for security and additional counselors. Here is a typical reaction to the clip:

Darien has also been charged with theft, retaliating against a witness, and stalking. Good.

Continue reading