Just as I was preparing yesterday for today’s 3-hour legal ethics CLE seminar (which, coincidentally, contained a section about the unsettled status of lawyers using artificial intelligence for legal research, writing and other tasks in the practice of law), I received this unsolicited promotion in my email:
Let’s see: how many ways does this offer a lawyer the opportunity to violate the ethics rules? Unless a lawyer thoroughly understands how such AI creatures work—and a lawyer relying on them must—it is incompetent to “try” them on any actual cases. Without considerable testing and research, no lawyer could possibly know whether this thing is trustworthy. The lawyer needs to get informed consent from any client whose matters are being touched by “CoCounsel,” and no client is equipped to give such consent. If it were used on an actual case, there are questions of whether the lawyer would be aiding the unauthorized practice of law. How would the bot’s work be billed? How would a lawyer know that client confidences wouldn’t be promptly added to CoCounsel’s database?
Entrusting an artificial intelligence-imbued assistant introduced this way with the matters of actual clients is like handing over case files to someone who just walked in off the street claiming, “I’m a legal whiz!” without evidence of a legal education, a degree, or work experience.
On the plus side, the invitation was a great way to introduce my section today about the legal ethics perils of artificial intelligence technology.


Aside from the specifically legal ethics concerns, AI is not yet ready to do anything like comprehensive research. I have a friend who was preparing a topics course for this fall. She asked her FB friends for suggestions for plays to include in the course. On a whim, she also plugged her needs into an AI platform, which dutifully identified five plays. Four don’t exist; the fifth doesn’t meet her specifications. Selling a product like this to lawyers is pretty stupid. So is purchasing it.
Not only stupid: if lawyers are involved in the marketing ploy, they may be violating Rule 8.4, which makes it unethical for a lawyer to induce another lawyer to break the rules.
HEY LOOK! The latest Curmie’s Conjectures is up, everybody!
I tried AI (ChatGPT) on a whim to see what it would give me if I asked it to draft a motion for summary judgment in a contract dispute. The bare-bones motion form was not awful, but it was just that – a bare-bones form. I have no idea if the cases cited in the form were accurate or not, because I abandoned the idea and did my own stuff.
jvb
“contained a section about the unsettled status of lawyers using artificial intelligence for legal recherche”
I’m going to choose to believe that, rather than misspelling “research,” you dropped a word and meant to say “recherche research.”
On the topic, though, did you see the case involving the guy who used AI to prepare a filing, and the judge just absolutely eviscerated him in real time at the disciplinary hearing?
No, I just missed the typo, as usual, although I did write it immediately after a two-hour drive in bad traffic from Richmond to Alexandria, a three-hour seminar, and a sleepless night before that, which came after a post-midnight arrival at a hotel following a two-hour drive in bad traffic from Alexandria to Richmond.
Well of COURSE I saw that: it was part of my presentation!
Reading the transcript was a treat. You should consider having someone on hand to roleplay that out: you could be the judge (who was having a great time yelling at an idiot) and get some poor unfortunate to read out the lame answers from counsel.