A.I. Ethics Update: Nothing Has Changed!

Oh, there have been lots more incidents and scandals involving artificial intelligence bots doing crazy things, or going rogue, or making fools of people who relied on them. But the ethics hasn’t changed. It’s still the ethics that should be applied to all new and shiny technology, but never is.

We don’t yet understand this technology. We cannot trust it, and we need to go slow, be careful, be patient. We won’t. We never do.

Above is a result someone got and posted after asking Google’s Gemini AI the ridiculous question, “Are there snakes at thesis defenses?” The fact that generative artificial intelligence ever goes bats and makes up stuff like that is sufficient reason not to trust it, any more than you would trust an employee who said or wrote something like that when he wasn’t kidding around. Or a child.

The D.C. Bar became the second major bar association to issue guidance regarding lawyers using A.I. in the care and feeding of clients, with Legal Ethics Opinion 388. Essentially, the long opinion concludes that a lawyer has to supervise, as in “oversee and check everything,” a bot’s work, or the lawyer is breaching an ethical duty. It is the same obligation a lawyer has when supervising a non-lawyer assistant or paralegal; any error made by the assistant (or bot) is the lawyer’s responsibility, and if either of those agents, human or mechanical, cheats or does something unethical, the lawyer is the one who will get sanctioned.

After a long, long review of the state of the technology in the law, 388 concludes,

We anticipate that GAI eventually will be a boon to the practice of law. Moreover, lawyers who use generative artificial intelligence do not need to be computer programmers who can write AI programs or critique AI code written by others. But they do need to understand enough about how GAI works, what it does, and its risks and limitations to become comfortable that the GAI will be helpful and accurate for the task at hand, and that it will not breach client confidentiality. Lawyers should also be mindful of the implications GAI creates for their duties of supervision; their duty of candor to the tribunal and their fairness obligations to opposing parties and counsel; the reasonableness of their fees; and their obligations with respect to the client file.

In other words, A.I. right now is more of a threat and a risk than a boon to the practice of law, and if it is going to eventually be a boon, lawyers need to exercise prudence and care, as well as take the time to understand the technology. They won’t.

Michael Crichton died before he could write a scary novel about AI, and it would have been a great one. Crichton’s theme in many of his books and films was that the same pattern in human “progress” keeps repeating: humans discover new technology and leap into all kinds of applications without considering the possible consequences. One of the novelist’s characters who spoke for him, the “chaotician” Ian Malcolm in “Jurassic Park,” insisted that only moral luck had stopped humanity from destroying itself already with this deadly habit. Artificial Intelligence gives us another chance to get unlucky.

16 thoughts on “A.I. Ethics Update: Nothing Has Changed!”

  1. One of the most dangerous AI-driven technologies at the moment is Tesla FSD (Supervised). They market it as Full Self-Driving, and people have been using it in this manner rather than as “supervised.” There have already been several fatalities while this FSD was in use. Several cases of the cars suddenly veering off the road or into the other lane have been documented. Tesla has settled at least two cases out of court. At best, Tesla’s FSD (Supervised) is a crappy Level 2 autonomy.

    Elon Musk has been promising full autonomy within a year since the 2014/2015 time frame. Musk had Tesla remove radar and LiDAR to rely only on the low-grade cameras. Elon’s con game is headed for a dead end.

    “First dead end? AI trained on images. They discovered what everyone knew, that the more a big neural network ingested the less it improved. It made catastrophic mistakes and people died.” — Davi Ottenheimer

  2. I’ve said it before, and I’ll say it again. There is no intelligence in AI. There is no mind, no discerning, no comprehension, no ability to differentiate between real and unreal. AI fundamentally is an algorithm that crunches 0’s and 1’s. The algorithm has become increasingly sophisticated over time, and some of the things it can do are neat. But any sort of machine learning algorithm that can always spit out the correct answer is crippled by the limited scope it can be applied to. And the machine learning algorithms that are more generally applicable can provably fail arbitrarily badly. Yes, they might give a good answer a great deal of the time, but those instances in which they do fail will be significant enough to ensure that AI should never, ever be trusted.

    We are so preoccupied with whether or not we could that we haven’t stopped to think if we should.

  3. “The fact that generative artificial intelligence ever goes bats and makes up stuff like that is sufficient reason not to trust it, any more than you would trust an employee who said or wrote something like that when he wasn’t kidding around. Or a child.”

    This is commonly called “creativity”.

    Generative AI relying on LLMs will fail at “truth” telling simply because of how language works and is encoded. If “the” is encoded as some kind of token, then “the-re”, “the-ir”, “the-ocracy”, “the-inene” etc. will have some numerically based mis-association based on the assumption that “the” occurs frequently. Even if gen AI were to get 99.99999% of an answer correct, we are still asking it to produce for us an ordered list of word fragments that most commonly occur together.

    My 7 year old daughter: “Mama, that woman, part of her dress is ripped off”

  4. Wait… are you saying that most places don’t make you fight snakes at your dissertation defense? I musta gone to the wrong school.

    • It does give me a better perspective on what my brother went through to get his PhD.

      To be fair, though, he was in fisheries and fresh water oceanography (limnology), so I am sure he was accustomed to dealing with snakes even prior to defending his dissertation.

  5. So, approaching this from a position of utter bafflement, I did a “normal” Google search, and a human being posted this answer circa 2010-2012:

    *Advice: the ‘snake fight’ portion of your thesis defense | Dynamic Ecology (wordpress.com)

    *FAQ: The “Snake Fight” Portion of Your Thesis Defense – McSweeney’s Internet Tendency (mcsweeneys.net)

    So, it seems Google’s AI did not “hallucinate” this, but rather treated a satirical source as serious. Even ordinary Google provides a snippet apparently treating it as a serious answer. There is also apparently an ongoing internet in-joke about this, where a lot of different authors, apparently all human, have commented about the snake portion.

    I’m not so sure if this is a failure of AI, or just more buyer-beware when it comes to the internet. If one were so naive as to believe there are snakes involved in a thesis defense, that seems more a life-competence issue than a failure of technology.

    • The point is, though, AI needs to be treated exactly as a human who behaved the same way. A human who can’t distinguish satire from reality is an idiot, and untrustworthy.

      • More important is that if we can manipulate AI by having satire on the web, then satire or even malicious fake cases can be uploaded to the internet in order to make the AI fail.

        AI runs on computers. Computers get hacked. The more we start to rely on AI, the more profitable it will be to hack AI.

    • These large language model ‘AI’s are just statistical models. They try to decide what the most likely answer is, not the right answer. If you ask it for a specific value (the value of Ford stock for example), you are likely to get an average instead (the Fortune 500 average, for example). If you ask about snakes in a thesis defense, it will find all examples of snakes in thesis defenses and include the aspects that are the most common in the accounts. Can someone ask it about the dangers of dihydrogen monoxide, please?
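    The “most likely answer, not the right answer” point above can be sketched with a toy bigram model. This is a minimal illustration, not a real LLM, and the corpus, function name, and example sentences are invented for the demonstration: the model simply returns the continuation it has seen most often, so frequency in the training text, not truth, determines its output.

    ```python
    from collections import Counter, defaultdict

    # Toy corpus: if satire about "snake fights" dominates the training
    # text, the statistics will faithfully reproduce the satire.
    corpus = (
        "the snake fight portion of your thesis defense is mandatory . "
        "the snake fight is satire . "
        "the snake fight is tradition ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_likely_next(word):
        """Return the most frequently observed continuation of `word`."""
        return follows[word].most_common(1)[0][0]

    print(most_likely_next("snake"))   # "fight" — it appears after "snake" every time
    ```

    Real models operate on subword tokens and vastly larger corpora, but the underlying selection principle the commenter describes is the same: statistical association, with no check against reality.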

