Independent Legal Ethics Journalism
April 10, 2026

The $145,000 Paradox: Courts Punish Lawyers for Using AI While 61% of Federal Judges Use It Themselves

Quick Facts

  • Q1 2026 Sanctions Total: At least $145,000 imposed by U.S. courts for AI-fabricated citations in the first three months of 2026 alone
  • The Surge: January: $5,000 in sanctions. February: $250. March: over $139,000, as the Sixth Circuit record, the Oregon record, and multiple state-level penalties all arrived within weeks
  • Oregon Record: $109,700 in combined sanctions and adverse costs against a single attorney for AI-hallucinated filings — believed to be the largest aggregate penalty tied to one attorney's AI misconduct in U.S. history
  • Sixth Circuit Record: $30,000 fine against two Tennessee attorneys — the largest federal appellate sanction yet linked to AI-fabricated citations; court additionally ordered opposing counsel's fees and double costs
  • Nebraska: The Nebraska Council for Discipline recommended temporary suspension of Omaha attorney Greg Lake on April 9, 2026, after his Supreme Court brief contained 57 errors out of 63 citations — fictitious cases, misquoted statutes, invented authorities
  • The Paradox: A Northwestern University survey published in the Sedona Conference Journal found 61.6% of federal judges use AI tools in their judicial work — in the same courts now sanctioning lawyers for comparable use
  • Global Scope: Researcher Damien Charlotin (HEC Paris) tracks 1,200+ AI hallucination cases in legal proceedings worldwide; approximately 800 from U.S. courts; ten courts flagged AI-fabricated filings on a single day in early 2026
  • Sources: NPR (Apr. 3, 2026); ComplexDiscovery (Apr. 9, 2026); WOWT/Nebraska (Apr. 9, 2026); Sedona Conference Journal Northwestern Survey (2026); Damien Charlotin, HEC Paris Smart Law Hub

The numbers tell a story the legal establishment does not want told.

In January 2026, U.S. courts imposed $5,000 in sanctions against attorneys for AI-related filing errors. In February, the total fell to $250 — barely a rounding error. Then March arrived, and with it a judicial reckoning so sudden and severe that researchers tracking AI sanctions globally called the pace "relentless": the Sixth Circuit issued a $30,000 sanction, the first substantial federal appellate penalty linked to AI-fabricated citations. A federal court in Oregon imposed $109,700 in combined penalties against a single attorney. Multiple other state and federal courts piled on within the same weeks. Total Q1 2026 AI sanctions: at least $145,000.
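A quick arithmetic check of those monthly figures, as a minimal Python sketch. January and February are reported exactly; March is reported only as "over $139,000," so the sum is a floor, consistent with the "at least $145,000" quarterly total.

```python
# Sanity check on the Q1 2026 monthly sanction figures cited above.
# January and February are exact; March is reported only as "over
# $139,000", so this sum is a floor rather than an exact total.
monthly_minimums = {"January": 5_000, "February": 250, "March": 139_000}

q1_floor = sum(monthly_minimums.values())
print(f"Q1 2026 floor: ${q1_floor:,}")  # $144,250

# The reported quarterly total is "at least $145,000", which implies
# March alone accounted for at least this much:
march_implied = 145_000 - monthly_minimums["January"] - monthly_minimums["February"]
print(f"Implied March minimum: ${march_implied:,}")  # $139,750
```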

That is not an enforcement trend. That is a message.

The message is being sent by courts — by the same institution whose judges, according to a Northwestern University survey published in the Sedona Conference Journal, use AI tools in their judicial work at a rate of 61.6%. More than six in ten federal judges use AI. Nearly half report that their courts have provided them no training on how to use it. More than one in five use it daily or weekly.

The legal establishment is not opposed to AI. It is opposed to lawyers using AI. There is a difference, and it is worth understanding precisely.

The Anatomy of the Surge: How March Became a Sanctions Apocalypse

To understand why Q1 2026 represents something qualitatively different from the AI sanctions of prior years, it helps to understand Oregon's role in building the machinery for this moment.

In December 2025, the Oregon Court of Appeals established what amounted to a tariff schedule for AI hallucination misconduct. In sanctioning Portland attorney Gabriel A. Watson $2,000 for AI-fabricated citations, the court set a per-infraction rate: $500 per fabricated citation, $1,000 per fabricated quotation. The formula was explicit. It was repeatable. Federal courts in Oregon adopted it.

The result: when U.S. Magistrate Judge Mark Clarke in the District of Oregon encountered a case in early 2026 involving a family dispute over a vineyard and winery — in which 15 fabricated citations and 8 invented quotations had been filed across multiple briefs — the math was straightforward and brutal. Apply the per-item rate. Add adverse costs. Issue the order. The total for the lead attorney exceeded $15,000 on that case alone, with the aggregate against the attorney for all related AI misconduct reaching $109,700.
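For concreteness, here is the per-item arithmetic as a minimal Python sketch. The $500 and $1,000 rates come from the Watson order described above; the function itself is purely illustrative, not a court-published tool, and it excludes the adverse costs the court layered on top.

```python
# Illustrative sketch of Oregon's per-infraction sanction schedule:
# $500 per fabricated citation, $1,000 per fabricated quotation.
# Adverse costs and fee awards are added separately by the court
# and are not modeled here.

def per_item_sanction(fabricated_citations: int,
                      fabricated_quotations: int,
                      citation_rate: int = 500,
                      quotation_rate: int = 1_000) -> int:
    """Base sanction under the per-item formula, in dollars."""
    return (fabricated_citations * citation_rate
            + fabricated_quotations * quotation_rate)

# The District of Oregon vineyard case: 15 fabricated citations and
# 8 invented quotations filed across multiple briefs.
base = per_item_sanction(15, 8)
print(f"Base sanction: ${base:,}")  # $15,500, matching "exceeded $15,000"
```

At those rates the vineyard case yields $15,500 before adverse costs, consistent with the figure above.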

Clarke wrote, in a lengthy opinion that has circulated widely in legal circles since: "In the quickly expanding universe of cases involving sanctions for the misuse of artificial intelligence, this case is a notorious outlier in both degree and volume."

Outlier for now. As Oregon's per-item formula spreads to other jurisdictions — and it is spreading — the outlier will become the template.

The Sixth Circuit arrived at its landmark sanction via a different path. In a Tennessee fireworks case, two attorneys filed briefs containing more than two dozen citations that were wrong, misleading, or nonexistent. The court found no evidence of deliberate fabrication — the errors bore the unmistakable signature of AI hallucination, not intentional fraud. It did not matter. The court levied $15,000 against each attorney, ordered them to pay opposing counsel's fees on the appeal, and imposed double costs. The opinion was explicit: it is the lawyer's duty, not the AI's, to read and verify every authority before filing. The technology is the tool. The professional is responsible for how the tool is used.

That principle — uncontroversial on its face — is where the legal establishment's enforcement campaign is most vulnerable to scrutiny. Because if the principle is correct, it applies to everyone using AI in legal work. Including the people administering the sanctions.

61.6%: The Number the Courts Don't Mention in Their Sanction Orders

The Northwestern University study, published in the Sedona Conference Journal in early 2026, surveyed a random sample of 502 federal judges and received 112 responses — a response rate of roughly 22%. Its headline finding has not made its way into any of the sanction orders being issued against attorneys for AI use: 61.6% of responding federal judges report using AI tools in their judicial work.

The most common applications: legal research and document review. The same categories that get attorneys sanctioned when the AI makes a mistake.

The study found that daily or weekly AI use among judges was lower — 22.4% — suggesting that the majority of the 61.6% are using AI on an occasional or ad-hoc basis. It also found that 45.5% of responding judges said their courts had provided no AI training whatsoever. Judges using AI tools, without court-provided training, for legal research and document review, in cases where they will later issue sanctions against attorneys for doing the same thing.
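Translated into head counts, those percentages look like this. The following is a back-of-the-envelope sketch that assumes a simple random sample and a normal approximation for the margin of error; the survey's actual methodology may differ, so treat the intervals as rough.

```python
# Rough reading of the Northwestern survey percentages as head counts.
# Assumes a simple random sample and a normal approximation for the
# 95% margin of error; illustrative only, not the study's own analysis.
import math

n = 112  # responses received, out of 502 judges surveyed

findings = [
    ("use AI tools in judicial work", 0.616),
    ("use AI daily or weekly",        0.224),
    ("no court-provided AI training", 0.455),
]

for label, p in findings:
    count = round(p * n)                     # implied number of judges
    moe = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% margin of error
    print(f"{label}: ~{count} of {n} judges (+/-{moe:.1%})")
```

Even at the low end of the 61.6% interval (roughly nine percentage points wide on each side), more than half of responding judges report using AI.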

Damien Charlotin, the researcher at HEC Paris's Smart Law Hub who maintains the world's most comprehensive database of AI hallucination cases in legal proceedings, put the broader scope in perspective in an April 3 NPR interview: he now tracks more than 1,200 such cases globally, with approximately 800 from U.S. courts. The pace has reached a point where he described "ten cases from ten different courts on a single day." He added, with a clarity that cuts through the legal establishment's framing of the issue: "We have this issue because AI is just too good — but not perfect."

Just too good. Not a rogue technology. Not an incompetent tool. A technology that is overwhelmingly useful, occasionally wrong, and being treated by a powerful institution as a professional sin — when the members of that institution are using the same tools for the same purposes in their own work.

Nebraska: The Anatomy of a Career in Jeopardy

The Nebraska Council for Discipline's April 9, 2026 recommendation to temporarily suspend Omaha attorney Greg Lake reads like a case study in how the AI enforcement machinery grinds up individual careers in its pursuit of institutional messaging.

Lake argued a divorce case before the Nebraska Supreme Court in February 2026. His brief contained 63 references. Of those 63, the opposing attorney identified 57 (more than 90 percent) that contained some form of problem: misquoted cases, fictitious cases, misquoted statutes. The justices grilled Lake on the record. One asked directly: "The elephant in the room is whether or not you used artificial intelligence. Did you?" Lake said no. He told the court he had filed a draft by accident — that his computer had broken on his 10th wedding anniversary, that he'd uploaded the wrong version of the brief.

The Nebraska Supreme Court dismissed the appeal, writing that Lake's explanation "lacks credibility." It referred him to the Council for Discipline. The Council has now recommended temporary suspension. Lake has a week to respond before the court issues its final ruling.

Lake's client in the divorce case — Jason Regan, who is fighting in court to remain in his daughter's life — has received a bill from opposing counsel for $17,000, due within days, with another $35,000 owed for his ex-wife's legal fees. Regan told WOWT he is "exhausted and frustrated with the legal system" and isn't sure he has the finances to pursue a malpractice claim against his own attorney.

Set aside the question of whether Lake used AI and lied about it, or genuinely filed the wrong draft. Either version of the story implicates the same institutional reality: the court system's response to AI-related filing errors has been designed to maximize the punitive impact on the individual practitioner, with consequences that radiate outward to clients who had nothing to do with the technology choice and everything to do with trusting the legal system to serve them.

Regan is not a cautionary tale about AI. He is a casualty of the war the legal establishment is waging against AI adoption — a war whose human costs are falling on clients, not on the institutions conducting it.

Georgia: The Supreme Court as Theater of Professional Humiliation

Nebraska was not the only state supreme court to stage a public AI inquisition in early 2026. The Georgia Supreme Court held a similar proceeding in March — grilling an attorney on the record about AI use in a brief, with the exchange captured on video that circulated widely in legal and tech media.

The Georgia episode followed the same script: suspected AI use, a brief riddled with errors, a public dressing-down by justices. The attorney, like Lake, denied AI use under questioning. The court, like Nebraska's, found the explanation unconvincing.

These supreme court proceedings are not primarily judicial enforcement actions. They are performances. They serve a function beyond the adjudication of the specific brief's defects: they communicate to every attorney watching that AI use is professionally dangerous — that it will be treated not as a competence question to be resolved through supervision and training, but as a moral failing to be exposed in the most public venue available to the legal establishment.

Carla Wale, associate dean at the University of Washington School of Law and a leading voice on AI ethics in legal education, acknowledged in the same NPR report that the ethical rules governing AI use are not settled: "I don't think there is a consensus beyond, 'You have to make sure it's correct.' And so for us, that is the baseline." She is designing optional AI ethics training for students interested in using the technology responsibly.

Optional. Voluntary. For students who are interested. Meanwhile, the courts are issuing mandatory, involuntary career consequences — up to and including suspension — for attorneys who fail to clear a bar that the profession's leading ethics educator describes as lacking consensus.

ABA Opinion 512 and the Moving Target of Compliance

ABA Formal Opinion 512, issued in 2024, addressed attorney obligations when using generative AI. Its framework covers competence (you must understand the technology's capabilities and limitations well enough to use it responsibly), confidentiality (client data must be protected when passed to commercial AI systems), candor (you cannot submit AI-generated content you haven't verified), and supervision (you remain responsible for AI-assisted work product). It warned that generic consent language is not sufficient when client data may flow through commercial AI platforms.

The opinion was, by ABA standards, thoughtful. It did not ban AI use. It set standards for responsible use — standards that, if applied consistently, would govern judges using AI for legal research and document review with the same force they apply to attorneys.

But ABA Opinion 512 governs attorneys, not judges. Judges are not subject to ABA professional rules in the same way attorneys are. The asymmetry is structural: the institution that enforces professional standards on the legal profession is not itself bound by them. Courts impose the rules. Courts apply the sanctions. Courts are not sanctioned.

This is not a new dynamic in professional regulation. Bar associations have always been in the business of policing practitioners while the institutions those practitioners appear before operate under different accountability structures. But in the context of AI — where the technology is advancing faster than any regulatory framework can track, and where the gap between permissible judicial use and impermissible attorney use has no principled justification — the asymmetry has become visible in a way it rarely has before.

The Northwestern study's finding that 45.5% of responding judges have received no AI training from their courts is, in this context, not just ironic. It is damning. The courts demanding that attorneys demonstrate AI competence before using the technology are staffed by judges who are using AI without the training they have implicitly required of everyone who appears before them.

The Liability Question Moving Upstream: Nippon Life v. OpenAI

The next front in the legal establishment's engagement with AI may be the most consequential yet. The Nippon Life v. OpenAI lawsuit — a case that has attracted relatively little mainstream media attention given its potential significance — tests a theory that could extend the liability for AI-generated legal errors from individual attorneys to the companies whose tools produced the hallucinations.

If that theory succeeds, the legal liability for AI hallucinations in legal filings could be shared between the attorney who filed the brief and the company whose technology generated the false citation. That reallocation of liability would transform the economics of AI sanctions entirely: the companies with the resources to bear that liability are not the individual practitioners currently being suspended and fined. They are the technology firms whose AI products are used across millions of professional interactions daily.

The legal establishment's instinct, watching this case, is not to welcome a liability framework that might reduce the burden on individual attorneys. It is to resist a framework that might legitimize AI use in legal work by normalizing the idea that AI tools are a professional resource like any other — with product liability, maintenance obligations, and appropriate disclaimers.

If AI tools are products with foreseeable misuse scenarios, then the legal profession's use of AI is a normal technology adoption story, not a professional ethics crisis. And if it is a normal technology adoption story, then the massive sanctions infrastructure the legal establishment has built around AI hallucinations starts to look not like ethics enforcement but like something else: a pricing mechanism. A toll. A deterrent designed to make AI adoption expensive enough that it remains a risk only large firms with compliance infrastructure can afford, while solo practitioners and small firms face career-ending consequences for using the same tools that are quietly automating work in every major law firm in the country.

The Q2 Prediction and What It Means

The ComplexDiscovery analysis that first aggregated the Q1 2026 sanctions data concluded with a prediction: watch Q2 2026 for accelerating numbers. The per-infraction formulas being adopted from Oregon will compound quickly. The Nebraska suspension, if upheld, will signal that bar discipline — not just monetary sanctions — is now on the table for AI-related filing errors. The judicial AI paradox exposed by the Northwestern study will either be addressed by new judicial conduct policies or will deepen into an untenable institutional contradiction.

None of these developments are happening in a vacuum. They are happening as AI adoption in the legal profession is accelerating regardless of the sanctions. Attorney survey data consistently shows that AI use in legal work is increasing, not decreasing, despite the enforcement campaign. The tools are simply too useful. They save time, they reduce costs, they surface research that would take hours to compile manually. The attorneys being sanctioned are, in many cases, not reckless technology cowboys — they are overworked practitioners trying to serve clients in a system that has never adequately resourced the work they are expected to do.

Damien Charlotin, the researcher who has watched this phenomenon longer and more carefully than almost anyone, offered the clearest diagnosis: "I am surprised that people are still doing this when it's been in the news." He meant it as an observation about the persistence of AI hallucination errors despite years of cautionary reporting. But it can be read another way: I am surprised that practitioners keep using AI to try to serve their clients better, despite the professional consequences. The technology is not going away. The work is not going away. The gap between what attorneys are expected to produce and what they are resourced to produce is not going away.

The legal establishment's response to that gap has been to make AI use professionally dangerous enough to deter it. The sanction wave of Q1 2026 — $145,000 and climbing, suspension recommendations, supreme court inquisitions — is the enforcement of that deterrence strategy in its most visible form yet.

It is not working. AI use continues. The hallucinations continue. The sanctions escalate. And 61.6% of federal judges open their AI tools and use them for legal research, without training, without sanctions, without any of the professional accountability they are administering to the practitioners who appear before them.

The paradox is not subtle. It is not ambiguous. It is the most transparent illustration yet of what the legal establishment's AI enforcement campaign is actually about: not the quality of citations in court filings, but the management of a profession that feels its authority threatened by a technology it cannot control — and is doing the only thing powerful institutions know how to do when they cannot control a threat. Making it cost more to try.


Sources and Citations

  • NPR. (Apr. 3, 2026). "Penalties stack up as AI spreads through the legal system." npr.org
  • ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures." complexdiscovery.com
  • WOWT. (Apr. 9, 2026). "Nebraska attorney faces suspension over alleged AI use in state Supreme Court brief." wowt.com
  • San Diego Union-Tribune. (Apr. 4, 2026). "San Diego attorney hit with one of largest ever sanctions for submitting AI-hallucinated filings." sandiegouniontribune.com
  • Noah News. (Apr. 10, 2026). "Courts escalate sanctions as AI hallucinations in legal filings surge in 2026." noah-news.com
  • Charlotin, D. (2026). AI Hallucinations in Legal Proceedings database. HEC Paris Smart Law Hub. damiencharlotin.com
  • Northwestern University / Sedona Conference Journal. (2026). Survey of Federal Judges on AI Tool Use in Judicial Work (502 surveyed; 112 responses).
  • ABA Formal Opinion 512 (2024). Generative Artificial Intelligence Tools.