Ethics Dunce: Geoffrey Hinton, “The Godfather of Artificial Intelligence”

You should know the name Geoffrey Hinton by now. To the extent that any one scientist is credited with the emergence of artificial intelligence, he’s it. He was among the winners of the prestigious Turing Award for his breakthroughs in artificial neural networks, and his discoveries were crucial in the development of today’s advanced artificial intelligence (AI) software, like ChatGPT and Google’s Bard. He spent 50 years developing the technology, the last 10 of which he spent working on AI at Google before he quit in 2023. His reason: he was alarmed at the lack of sufficient safety measures to ensure that AI technology doesn’t do more harm than good.

And yet, as revealed in a recent interview on CBS’s “60 Minutes,” Hinton still believes that his work to bring AI to artificial life was time well-spent, that his baby was worth nurturing because of its potential benefits to humanity, and that—get this—all we have to do is, for the first time in the history of mankind, predict the dangers, risks and looming unintended consequences of an emerging new technology, get everything right the first time, not make any mistakes, not be blindly reckless in applying it, and avoid the kind of genie-out-of-the-bottle catastrophes that the world has experienced (and is experiencing!) over and over again.

That’s all!

Continue reading

Friday Open Forum, Strange Times Edition

That’s “Emily Pellegrini” above again, the famed digital model created with the assistance of an AI program. For some reason Emily was not entered in the World AI Creator Awards, a beauty pageant for imaginary women. Go figure.

So…whose victory is more justifiable in a female beauty pageant today? A morbidly obese woman? A biological male? Or a woman who doesn’t exist at all?

Never mind. Find some beauty in ethics. If you can. I’ll settle for even virtual beauty.

A.I. Ethics Update: Nothing Has Changed!

Oh, there have been lots more incidents and scandals involving artificial intelligence bots doing crazy things, or going rogue, or making fools of people who relied on them. But the ethics hasn’t changed. It’s still the ethics that should be applied to all new and shiny technology, but never is.

We don’t yet understand this technology. We cannot trust it, and we need to go slow, be careful, be patient. We won’t. We never do.

Above is a result someone got and posted after asking Google’s Gemini AI the ridiculous question, “Are there snakes at thesis defenses?” The fact that generative artificial intelligence ever goes bats and makes up stuff like that is sufficient reason not to trust it, any more than you would trust an employee who said or wrote something like that when he wasn’t kidding around. Or a child.

Continue reading

Ick, Unethical, or Illegal? The Fake Scarlett Johansson Problem

This is one of those relatively rare emerging ethics issues that I’m not foolhardy enough to reach conclusions about right away, because ethics itself is in a state of flux, as is the related law. All I’m going to do now is begin pointing out the problems that are going to have to be solved eventually…or not.

Of course, the problem is technology. As devotees of the uneven Netflix series “Black Mirror” know well, technology opens up as many ethically disturbing unanticipated (or intentional) consequences as it does societal enhancements and benefits. Now we are all facing a really creepy one: the artificial intelligence-driven virtual friend. Or companion. Or lover. Or enemy.

This has been brought into special focus because of an emerging legal controversy. OpenAI, the creators of ChatGPT, debuted a seductive version of the voice assistant last week that sounds suspiciously like actress Scarlett Johansson. What a coinkydink! The voice, dubbed “Sky,” evoked the A.I. assistant with whom the lonely divorcé Theodore Twombly (Joaquin Phoenix) falls in love in the 2013 Spike Jonze movie, “Her,” and that voice was performed by…Scarlett Johansson.

Continue reading

Fixing This Problem Requires Leaping Onto a Slippery Slope: Should We?

Nicholas Kristof has sounded the alarm on the growing problem of artificial intelligence deepfakes online. I must admit, I was unaware of the extent of the phenomenon, which is atrocious. He writes in part,

[D]eepfake nude videos and photos …humiliate celebrities and unknown children alike. One recent study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women or girls…Companies make money by selling advertising and premium subscriptions for websites hosting fake sex videos of famous female actresses, singers, influencers, princesses and politicians. Google directs traffic to these graphic videos, and victims have little recourse.

Sometimes the victims are underage girls….While there have always been doctored images, artificial intelligence makes the process much easier. With just a single good image of a person’s face, it is now possible in just half an hour to make a 60-second sex video of that person. Those videos can then be posted on general pornographic websites for anyone to see, or on specialized sites for deepfakes.

The videos there are graphic and sometimes sadistic, depicting women tied up as they are raped or urinated on, for example. One site offers categories including “rape” (472 items), “crying” (655) and “degradation” (822)….In addition, there are the “nudify” or “undressing” websites and apps …“Undress on a click!” one urges. These overwhelmingly target women and girls; some are not even capable of generating a naked male. A British study of child sexual images produced by artificial intelligence reported that 99.6 percent were of girls, most commonly between 7 and 13 years old.

Yikes. These images don’t qualify as child porn, because the laws against that are based on the actual abuse of the children in the photos. With the deepfakes, no children have been physically harmed. Right now, there are no laws directed at what Kristof is describing. He also links to two websites on the topic started by young women victimized by altered photos and deepfaked videos of them being spread online: My image My choice, and AI Heeelp!

Continue reading

Still More Anti-White Discrimination Whack-a-Mole, But This One Is Really Funny…

As currently inclined, Google’s artificial intelligence bot “Gemini” will not produce an image of a Caucasian no matter how many times you ask or what you ask for. The above pictures were among the results when Gemini was asked to show “Founding Fathers.” In another example, a user asked for images of the Pope and got these:

I love it!

When Gemini was asked directly to “create a portrait of a white male,” the DEI-addled bot replied, “While I am able to generate images, I am currently not able to fulfill requests that include discriminatory or biased content.” Of course! White people are inherently discriminatory and biased.

Google brass isn’t denying the glitch. “We’re working to improve these kinds of depictions immediately,” Google’s Senior Director of Product Management Jack Krawczyk told inquiring minds. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Gee, I wonder how this happened?

________________

Pointer and Source: Liberty Unyielding

More Evidence California Doesn’t Get That First Amendment Thingy…

It’s not the only one, but still…

Assembly Bill 1831, introduced by California Assemblyman Marc Berman (D–Palo Alto) this month, would expand the state’s definition of child pornography to include “representations of real or fictitious persons generated through use of artificially intelligent software or computer-generated means, who are, or who a reasonable person would regard as being, real persons under 18 years of age, engaging in or simulating sexual conduct.”

Does Berman comprehend why the possession of child pornography is a crime in the first place? Clearly not. Somebody please explain to him that the criminal element in child porn is the abuse of living children required to make it. The theory, which I have always considered something of a stretch but whose utilitarian ethical argument I can accept, is that those who purchase or otherwise show a proactive fondness for such “art” in effect aid, abet, encourage and make possible the continuation of the criminal abuse and trafficking of minors. It is not that such photos, films and videos cause one to commit criminal acts on children. That presumption slides down a slippery slope that would justify banning everything from Mickey Spillane novels to “The Walking Dead.”

Continue reading

Florida Becomes the First Bar to Issue Ethics Guidance on the Use of Artificial Intelligence in the Practice of Law

After seeking comments last fall on a proposed advisory opinion to its members on the ethical use of artificial intelligence by lawyers in the practice of law, the Florida Bar’s review committee has voted unanimously to issue Florida Bar ethics opinion 24-1, the first such opinion by any U.S. jurisdiction about the assuredly revolutionary changes in legal practice and the concomitant perils that lie ahead as a result of AI technology. The advisory opinion’s summary:

“Lawyers may use generative artificial intelligence (“AI”) in the practice of law but must protect the confidentiality of client information, provide accurate and competent services, avoid improper billing practices, and comply with applicable restrictions on lawyer advertising. Lawyers must ensure that the confidentiality of client information is protected when using generative AI by researching the program’s policies on data retention, data sharing, and self-learning. Lawyers remain responsible for their work product and professional judgment and must develop policies and practices to verify that the use of generative AI is consistent with the lawyer’s ethical obligations. Use of generative AI does not permit a lawyer to engage in improper billing practices such as double-billing. Generative AI chatbots that communicate with clients or third parties must comply with restrictions on lawyer advertising and must include a disclaimer indicating that the chatbot is an AI program and not a lawyer or employee of the law firm. Lawyers should be mindful of the duty to maintain technological competence and educate themselves regarding the risks and benefits of new technology.”

Continue reading

Comment of the Day: “’Ick or Ethics’ Ethics Quiz: The Robot Collaborator”

Here’s a fascinating Comment of the Day by John Paul, explaining his own experiences with ChatGPT relating to yesterday’s post, “’Ick or Ethics’ Ethics Quiz: The Robot Collaborator”:

***

Well, if it’s a competition, and against the rules, I think it’s pretty easy to say yes, it’s unethical.

However, to help out with just some simple problems, I see using an AI program as no different than asking an editor to go over your book. As someone who has messed around with AI on this particular level (mostly for help with grammar and syntax issues), I have concluded that its contributions are dubious at best, at least as far as the technology has advanced.

Consider the following: Here are two paragraphs I wrote for my book last night:

“Kesi stared at the back of the door for a long time. At some point, she lifted her hand to gingerly touch the spot that was starting to numb across her check. Its bite stung upon contact with her sweaty fingers and she reflexively drew it away, just to carefully guide it back again. For a brief moment she played this game of back and forth much like the younglings who would kick the ball in the yard, until she finally felt comfortable with feeling of leaving her hand to rest upon her face. When it finally found its place, the realization of what had just happened hit her just as quickly and suddenly as if Eliza slapped her.”

“Not once, not twice, but Eliza slapped her three times with enough force to send tears down her face. In the moment she might have been too confused to see what was going, but now she was forced to grapple with the weight of the truth that was settling in her chest. (Yes, I realize this isn’t the greatest prose, but it was 2am and I was tired).”

Here’s what ChatGPT suggested I do with those sections when correcting for issues:

Continue reading

What A Surprise! Unethical Ex-Trump Lawyer Michael Cohen Has An Unethical Lawyer

I guess that should be “another unethical lawyer,” since Trump’s disbarred fixer was previously represented by Lanny Davis, who previously spun for the Clintons.

This, however, is funny: Cohen’s current lawyer, in arguing to a judge that court supervision of his client should be terminated now that Cohen is out of prison, included three imaginary cases in his filing last month.

“As far as the court can tell,” Manhattan federal judge Jesse M. Furman wrote yesterday, “none of these cases exist.”

Given that Cohen is Cohen and among the most unethical people with a law degree in the country, suspicion was immediately sparked that he was behind his lawyer’s fantasies. But this is the era of nascent SkyNet, and unwitting lawyers and paralegals have already been caught using chatbots for legal research, to their sorrow. Last June, for example, a federal judge fined two lawyers $5,000 for putting their names on a legal brief containing made-up cases and citations concocted by aspiring lawyer ChatGPT. The fines were widely derided as insufficient, but judges traditionally are sympathetic when lawyers misuse technology that the judges don’t understand….at least the first time around.

So maybe Cohen’s lawyer was fooled by a bot. Another possibility is that Cohen’s lawyer, Cohen-like, just cheated. I have been told by many litigators over the years that they routinely find fake cases in their adversaries’ briefs, memos and motions.

Furman has ordered Cohen’s attorney to provide copies of the three mystery decisions within a week, or provide a sworn declaration explaining “how the motion came to cite cases that do not exist and what role, if any, Mr. Cohen played in drafting or reviewing the motion before it was filed.”

Given the client, this story is as perfect a candidate for a Nelson as I could imagine.