Independent Legal Ethics Journalism
April 4, 2026

The Hallucination Crisis: How AI-Generated Fake Citations Are Devastating Legal Careers Across America

Quick Facts

  • Scope: Over 1,200 documented cases of AI-generated hallucinations in court filings worldwide, approximately 800 in U.S. courts
  • Trend: Cases accelerating rapidly in early 2026, with 10 separate courts flagging incidents in a single day
  • Record Sanction: $109,700 ordered against an Oregon attorney in March 2026 — believed to be the highest AI-related penalty in U.S. history
  • Key Cases: Phoenix Suns discrimination lawsuit (Arizona); City of New Orleans Law Department resignations (Louisiana); Sixth Circuit sanctions (Tennessee)
  • Source: Damien Charlotin, HEC Paris, maintains the worldwide tracker of AI hallucination incidents in courts

On the last day of March 2026, U.S. District Judge Murray Snow in Arizona issued a ruling that crystallized a crisis quietly consuming the American legal profession. Attorney Sheree Wright, representing a former Phoenix Suns employee in a sexual harassment and retaliation lawsuit against the NBA franchise, had filed three separate motions containing at least eighteen fabricated legal citations — cases that never existed, quotations attributed to judges who never said them. The source was not a rogue associate or a careless paralegal making typographical errors in case numbers. It was artificial intelligence.

Judge Snow called Wright’s explanation of how the fake citations ended up in official court filings “a convoluted tale.” Wright blamed a staff member who, she said, had substituted her own AI-generated draft while Wright was on bereavement leave. When pressed on the nature of the bereavement, Wright told reporters it was for her dog. The judge was unpersuaded. He ordered Wright to pay a portion of the Phoenix Suns’ legal costs, mandated additional ethics training on AI use, and forwarded a copy of his ruling to the Arizona State Bar and every district and magistrate judge in the state.

“We have a duty to take full responsibility and I do so here,” Wright told Arizona’s Family in an interview. “Since this occurred, we have taken meaningful steps to ensure that this does not happen again.” She said AI use is now barred at her firm and that she self-reported to the state bar.

Her case is not an outlier. It is a symptom.

The Numbers Tell a Staggering Story

Damien Charlotin, a researcher at HEC Paris, the elite French business school, has been quietly maintaining what has become the definitive global database of AI hallucination incidents in legal proceedings. His tracker now documents more than 1,200 cases worldwide in which courts have flagged, sanctioned, or otherwise addressed the submission of AI-fabricated legal citations. Approximately 800 of those are from American courts.

“Recently we had 10 cases from 10 different courts on a single day,” Charlotin told NPR. “We have this issue because AI is just too good — but not perfect.”

The rate is still accelerating. On a single day — March 31, 2026 — Charlotin’s database shows incidents in Arizona, Nevada, Minnesota, Washington State, Ohio, New York, Iowa, and Arkansas. Some involved licensed attorneys. Many involved pro se litigants using ChatGPT, Google’s Gemini, or Perplexity AI as ersatz legal research tools. The common thread: none of them verified whether the cases they cited actually existed.

The penalties are escalating in tandem. In March 2026, a federal court in Oregon ordered an attorney to pay $109,700 in sanctions and costs for filings containing AI-generated errors, believed to be the highest penalty of its kind in American legal history. That figure dwarfs the $3,000-per-attorney fines levied in 2025 against the lawyers for MyPillow CEO Mike Lindell, which at the time seemed severe.

Two City Attorneys Walk Out the Door in New Orleans

While the Phoenix Suns case made headlines in sports media, a quieter and arguably more troubling drama was unfolding in Louisiana. Two attorneys in the City of New Orleans Law Department resigned after U.S. District Judge Carl Barbier discovered that a motion filed in January 2026 — in a civil rights lawsuit against the city, former Mayor LaToya Cantrell, the New Orleans Police Department, and several officers — contained nine completely fabricated case citations generated by ChatGPT.

Assistant City Attorney Jalen Harris admitted during a March 19, 2026 hearing that he had first searched Westlaw, the traditional legal research database, but then turned to ChatGPT “to speed up the work.” He told the court he did not check whether the AI-generated cases were real. He did not read them. He simply added them to the brief.

His supervisor, Deputy City Attorney James Roquemore — a lawyer with thirty years of experience — reviewed the motion before it was filed. He did not question the unusual formatting of the fake citations, which appeared as bullet points rather than in standard legal citation format. Judge Barbier noted that Roquemore bore greater responsibility precisely because of his experience and supervisory role.

Harris was fined $250. Roquemore was fined $1,000. Chief Deputy City Attorney Corwin St. Raymond received a formal warning. Both Harris and Roquemore subsequently tendered their resignations. The newly appointed City Attorney, Charline Gipson, who had taken office only three days before the tainted motion was filed, implemented a department-wide AI policy effective March 27, 2026, requiring attorneys to disclose any AI use in their work product and to certify compliance annually.

“The City Attorney’s Office takes every step to ensure accuracy in directing and supervising the legal affairs of the City of New Orleans,” Gipson said in an official statement. The policy “recognizes the benefits of innovation and efficiency offered by AI, which is rapidly transforming the legal field, while safeguarding against the potential harms and misuse or abuse of AI.”

The Sixth Circuit Weighs In

The crisis has now reached the federal appellate level. In early April 2026, the United States Court of Appeals for the Sixth Circuit sanctioned attorneys who submitted AI-hallucinated citations in United States v. Farris, a case out of the circuit that covers Tennessee, Kentucky, Ohio, and Michigan. The court disqualified counsel from the case without compensation for the work performed, struck the offending briefs, referred the matter to the state bar, and published its opinion as formal notice to practitioners.

The tool used in that case was not ChatGPT but Westlaw’s own AI-powered CoCounsel — a product specifically marketed to lawyers as a safer alternative to general-purpose AI. The fabrication included false quotations attributed to real cases, a particularly insidious form of hallucination because it makes verification more difficult. A lawyer checking whether the cited case exists would find it, but the quoted language would be entirely invented.
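
To make that concrete, here is a minimal sketch in Python of the second check a citation needs. The fetch_opinion_text helper referenced in the comments is hypothetical, a stand-in for whatever primary source the opinion is actually pulled from, such as the court's own records or a service like CourtListener:

    import re

    def normalize(text: str) -> str:
        """Collapse whitespace and straighten curly quotes so formatting
        differences don't mask a genuine match."""
        text = (text.replace("\u201c", '"').replace("\u201d", '"')
                    .replace("\u2019", "'"))
        return re.sub(r"\s+", " ", text).strip().lower()

    def quote_appears_in(quote: str, opinion_text: str) -> bool:
        """True only if the quoted passage actually occurs in the opinion's
        full text. A miss is a red flag, not proof of fabrication: pin
        cites, ellipses, and [alterations] still need human review."""
        return normalize(quote) in normalize(opinion_text)

    # Usage, assuming a hypothetical fetch_opinion_text(citation) helper:
    # if not quote_appears_in(quoted_language, fetch_opinion_text(citation)):
    #     print(f"VERIFY MANUALLY: quote not found in {citation}")

Checking that a case exists catches an invented case; only this second, full-text check catches an invented quotation planted inside a real one.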

Why This Keeps Happening

The conventional explanation, that lawyers are lazy or technologically illiterate, does not adequately account for the breadth of the problem. Sean Harrington, the director of Arizona State University’s AI and Legal Tech Studio, told reporters that there have been more than 1,200 known cases of AI hallucinations in court filings. “And those are only the ones known,” he emphasized.

The fundamental issue is structural. Large language models — the technology underlying ChatGPT, Gemini, Claude, CoCounsel, and every other generative AI tool — do not retrieve information. They generate text that is statistically likely to follow from the input they receive. When asked for a case supporting a legal proposition, these systems construct what a plausible case citation looks like: a realistic case name, a plausible court, a convincing date, even quotations that sound like judicial prose. They are, in the most literal sense, making it up — but doing so with such fluency that even experienced attorneys fail to spot the fabrication.
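
The practical guardrail that follows from this is mechanical: extract every citation from a draft and confirm each one against an authoritative index before filing. Below is a minimal sketch of that workflow, assuming the open-source eyecite citation parser from the Free Law Project; the lookup itself is left as a stub, since the right authority (Westlaw, Lexis, or CourtListener's citation-lookup API) depends on what a firm has access to, and the citations in the toy draft are invented for the example:

    # pip install eyecite  (open-source citation parser, Free Law Project)
    from eyecite import get_citations

    draft = """
    Plaintiff relies on Smith v. Acme Corp., 512 F.3d 1010 (9th Cir. 2008),
    and Jones v. Doe, 43 F.4th 210 (5th Cir. 2022).
    """  # illustrative text only; these citations are made up

    def found_in_authoritative_index(cite_text: str) -> bool:
        """Stub: replace with a real lookup against Westlaw, Lexis, or
        CourtListener's citation-lookup endpoint."""
        return False  # treat everything as unverified until proven real

    # A language model will emit strings shaped exactly like the citations
    # above whether or not the cases exist; only extraction plus lookup
    # against a real index separates the two.
    for cite in get_citations(draft):
        text = cite.matched_text()
        if not found_in_authoritative_index(text):
            print(f"needs verification before filing: {text}")

Run against the toy draft, both citations come back flagged, which is the correct default: nothing an AI drafted is citable until a human has confirmed the case exists and says what the brief claims it says.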

“I am surprised that people are still doing this when it’s been in the news,” said Carla Wale, associate dean of information and technology and director of the Gallagher Law Library at the University of Washington School of Law. She is designing specialized AI ethics training for law students, but she acknowledges that the ethical rules governing AI use remain unsettled.

“I don’t think there is a consensus beyond, ‘You have to make sure it’s correct,’” Wale said. “And so for us, that is the baseline.”

The Rules Haven’t Caught Up

Lawyers are already required to certify the accuracy of their filings under Rule 11 of the Federal Rules of Civil Procedure and to maintain competence under Model Rule of Professional Conduct 1.1. In theory, these rules should be sufficient: if you cite a case, you must verify it exists, regardless of how you found it.

In practice, the sheer speed and volume that AI enables have outstripped the profession’s verification habits. Joe Patrice, senior editor at Above the Law, has been tracking how AI tools are being “forced” into virtually all legal software.

“It’s going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, ‘Hey, this is AI assisted,’ at which point it kind of becomes a useless endeavor,” Patrice told NPR.

Some courts have moved independently. Kenosha County Circuit Court in Wisconsin now requires attorneys to label anything produced with AI, including specific details about which tools were used. The goal is both transparency and deterrence — making it easier to know which filings to double-check.

But Patrice is skeptical of labeling requirements as a long-term solution. His deeper concern is what comes next: “agentic” AI systems that promise to handle legal tasks from start to finish, removing the human from the middle steps entirely.

“I think once you obscure those middle steps, that’s where mistakes happen,” he said. “And even people who are well-meaning and not lazy will lose things because they weren’t involved in that process.”

The Economic Pressure

Behind the discipline cases lies an economic logic that the legal profession has been reluctant to confront. AI tools can compress hours of legal research into minutes. For law firms that bill by the hour, this creates a paradox: the tool that makes you more efficient also threatens your revenue model.

“There are two options,” Patrice said. “The lawyers can agree to take less — pause for laughter — or they can start finding a new way to bill.”

He predicts a shift toward item-based billing, which would ratchet up time pressure on attorneys and make it even more tempting to accept the first draft of whatever AI produces without adequate review. “Future generations who grow up in a world where this is always a reality, do they know to stop and think the problem through?” he asked. “And that’s a worry.”

When AI Itself Becomes the Defendant

In a twist that underscores just how far the problem has metastasized, OpenAI itself — the maker of ChatGPT — was sued in March 2026 by Nippon Life Insurance Company of America in federal court in Illinois. The insurance company alleges it was the target of frivolous legal actions by an individual who was receiving legal advice directly from ChatGPT. Among the claims: that OpenAI is engaging in the unauthorized practice of law.

OpenAI told NPR the complaint “lacks any merit whatsoever.”

But the lawsuit raises a question the legal profession will eventually have to answer: When AI tools produce output that reads like legal advice, looks like legal research, and is being used as legal research — at what point does the developer bear some responsibility for the consequences?

The Human Cost

It is easy to read these cases as stories about technological incompetence, but the human dimensions are worth pausing over. Jalen Harris, the New Orleans assistant city attorney, had been practicing since 2024 — barely two years. He was a young lawyer in a resource-strapped municipal law department, trying to keep up with a heavy caseload. When Westlaw didn’t give him what he needed quickly enough, he turned to the tool that everyone was talking about. He didn’t verify the output. The error cost him his career.

Sheree Wright, the Phoenix attorney, was dealing with the death of a loved one while trying to keep a complex employment discrimination case moving forward. The staff member she trusted submitted an AI-generated draft without her knowledge. The responsibility was still hers. It always is.

Greg Lake, the Omaha attorney hauled before the Nebraska Supreme Court in February 2026, denied using AI at all. He told the justices he had mistakenly uploaded a working draft from a malfunctioning computer. The court was not convinced and referred him for discipline. A similarly uncomfortable scene played out before the Georgia Supreme Court in March.

These are not stories about bad lawyers, necessarily. They are stories about a profession being transformed faster than its members — or its regulators — can adapt.

What Comes Next

Carla Wale, the University of Washington law librarian, rejects the most apocalyptic predictions about AI replacing human lawyers. But she frames the future in Darwinian terms.

“I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don’t,” she said. “That’s what I think the future is.”

In the meantime, the database keeps growing. Ten cases in ten courts on a single day. $109,700 in sanctions for a single lawyer. Two careers ended in a New Orleans municipal office. Eighteen phantom cases cited in a lawsuit against an NBA franchise. A legal research tool marketed as safe producing fabricated quotations that fooled even experienced appellate lawyers.

The hallucination crisis is not slowing down. If anything, as AI tools become more sophisticated and more deeply embedded in legal practice, the potential for catastrophic error grows in proportion. The question is no longer whether the legal profession can avoid this problem. It is whether the profession can build guardrails fast enough to survive it.


Sources and Citations

  • Charlotin, D. (2026). AI Hallucinations in Court Proceedings: Worldwide Tracker. damiencharlotin.com/hallucinations
  • Chelsea Montes v. Suns Legacy Partners LLC, No. 2:24-cv-01234 (D. Ariz. Mar. 31, 2026)
  • Daniel Gentry v. City of New Orleans, et al., Order on Sanctions (E.D. La. Mar. 20, 2026)
  • United States v. Farris (6th Cir. Apr. 3, 2026)
  • Kaste, M. (Apr. 3, 2026). “Penalties Stack Up as AI Spreads Through the Legal System.” NPR
  • KTVK/KPHO (Apr. 2, 2026). “Attorney Disciplined for AI-Generated Fake Cases in Phoenix Suns Lawsuit.” AZFamily.com
  • KJZZ (Apr. 1, 2026). “Lawyers for Ex-Phoenix Suns Employee Suing Team Face Penalties.” KJZZ.org
  • WDSU (Apr. 2, 2026). “New Orleans Attorneys Resign Amid Improper AI Use Investigation.” WDSU.com
  • ABA Model Rules of Professional Conduct, Rules 1.1, 3.3, 8.4; Fed. R. Civ. P. 11
  • National Law Review (Mar. 2026). “Sixth Circuit Sanctions Attorneys for Fake Citations.” natlawreview.com