By Now, No Lawyer Should Be Excused For Making This Blunder

Yesterday, Judge Kelly Rankin of the District of Wyoming issued an order to show cause in Wadsworth v. Walmart Inc. He noted that in a motion to the court, the plaintiffs' counsel had cited nine cases:

1. Wyoming v. U.S. Department of Energy, 2006 WL 3801910 (D. Wyo. 2006);

2. Holland v. Keller, 2018 WL 2446162 (D. Wyo. 2018);

3. United States v. Hargrove, 2019 WL 2516279 (D. Wyo. 2019);

4. Meyer v. City of Cheyenne, 2017 WL 3461055 (D. Wyo. 2017);

5. U.S. v. Caraway, 534 F.3d 1290 (10th Cir. 2008);

6. Benson v. State of Wyoming, 2010 WL 4683851 (D. Wyo. 2010);

7. Smith v. United States, 2011 WL 2160468 (D. Wyo. 2011);

8. Woods v. BNSF Railway Co., 2016 WL 165971 (D. Wyo. 2016); and

9. Fitzgerald v. City of New York, 2018 WL 3037217 (S.D.N.Y. 2018).

The judge then stated that none of the cases exist except United States v. Caraway. The others were figments of ChatGPT’s vivid imagination.

A year ago, when artificial intelligence was just beginning to find its way into law practice, the initial incidents of lawyers filing fake cases with courts received much attention and mockery. The two most publicized cases involved lazy associates and neglectful partners who were obligated to check the new technology's work product. Another instance of an AI bot providing fake cases was the handiwork of former Trump fixer and disbarred lawyer Michael Cohen, an idiot. But these episodes were well publicized; I used them in my seminars all year. It seemed to me, at least by my last one in 2024, that every lawyer had read about these cautionary tales.

There have been at least three such cases: Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 3696209 (S.D.N.Y. May 4, 2023); United States v. Hayes, No. 2:24-CR-0280-DJC, 2024 WL 5125812 (E.D. Cal. Dec. 16, 2024); and United States v. Cohen, No. 18-CR-602 (JMF), 2023 WL 8635521 (S.D.N.Y. Dec. 12, 2023). Judges have responded by ordering the filing attorneys to show cause why sanctions or discipline should not issue, and "Ooopsie!" has not sufficed as such cause. This time, the judge has ordered that at least one of the three attorneys involved provide "a true and accurate copy" of the mystery cases by February 10, 2025, which of course they cannot do. If they can't provide the cases, the order says, the lawyers "shall separately show cause in writing why he or she should not be sanctioned pursuant to (1) Fed. R. Civ. P. 11(b), (c); (2) 28 U.S.C. § 1927; and (3) the inherent power of the Court to order sanctions for citing non-existent cases to the Court."

This written submission is due on February 13 and “shall take the form of a sworn declaration” that contains “a thorough explanation for how the motion and fake cases were generated,” as well as an explanation from each lawyer of “their role in drafting or supervising the motion.”

The lawyers, a.k.a. Larry, Moe, and Curly, are Rudwin Ayala, Taly Goody, and Timothy Michael Morgan. Taly Goody is a small-firm lawyer, and small firms have a bit more of an excuse (but not much) for botches like this. Rudwin Ayala and Michael Morgan, however, work at Morgan & Morgan, which describes itself in its ubiquitous TV ads and on its website as "America's largest injury law firm." It is the 42nd largest firm in the country according to The American Lawyer.

There is no excuse for a firm of that size and prominence making this mistake. I expect the sanctions to be swift and terrible. I have often pointed out in my ethics seminars that judges tend to be relatively lenient with lawyers who make mistakes using new technology, as in the case of misdirected email when it first became widely used. That is because judges are often the last to understand the implications of new technology, and frequently the last to use it themselves. However, nobody in the legal field who was paying attention could have possibly missed those earlier cases and their import, and the Rules of Professional Conduct demand that lawyers pay attention, or else.

9 thoughts on "By Now, No Lawyer Should Be Excused For Making This Blunder"

  1. I’ve been retired for twenty-five years, but how on earth could anyone rely on a computer to do their work? I was still doing research with West books, never mind Westlaw. Folly of this magnitude is unimaginable. How is it even something you need to discuss in seminars?

    • I would think that search engines already exist for case law? Why rely on AI to do your work when databases for searching cases already exist? There must be some kind of legal database available for searching. Even if you use AI, you would still have to double-check the results, so you might as well just search the case law yourself.

      Looks like FindLaw and others exist to assist in these matters.

      • Sure. A search engine gets you to the cases, just as CJS does (did?), but you have to go to the cases and read the cases and determine what they actually say. Rely on a computer (an amalgamation of algorithms?) to provide authority for a proposition it’s making for you? That’s nuts.

  2. IANAL, but wouldn’t at least reading your citations be considered basic due diligence? Who am I kidding, of course it would. If I had cited nonexistent material when giving training, I would be lucky to be laughed at.

  3. Is there some system that is actually based on retrieving court documents and case histories? Like a specialized database that lawyers access just for this type of work?

    Maybe next time I need to fix a car, I’ll use AI to look up possible causes of the symptoms and then use a 3D printer to fabricate make-believe parts to install…

    • Of course there are plenty of them, many free. Checking citations is ridiculously easy in most cases, and AI isn’t necessary, much less reliable. Even on Google, which shows its bot’s response to an inquiry as well as its regular search results, I often find contradictory answers, with the AI being the wrong one.

  4. I use AI extensively in my practice. It is tremendously effective in providing an overview of case law and concepts for a particular issue. When provided with enough factual detail, it can frequently solve problems and provide key references at a huge savings in time and money for the client. But I use an AI that runs exclusively on a legal research platform, Lexis. Every case citation comes as a link to the actual verbiage, and I can check the case with the click of a mouse button. And I do, for every single citation, because I find that as good as the AI algorithm is, it still misinterprets the occasional reference. Why anyone would use ChatGPT, Google, or some other non-legal database for citation purposes without fly-specking those citations is beyond me. I’m wondering if these firms are increasing their profitability by using paralegals to write their memos and not making the effort to check the work product carefully.
