Independent Legal Ethics Journalism
April 19, 2026

The Gavel and the Gate: How Courts and the Legal Profession Use AI Ethics Rules as a Weapon of Self-Preservation

⚡ THE BOTTOM LINE
  • Over 1,300 documented cases of courts sanctioning attorneys for AI-generated errors — and climbing
  • Sanctions have escalated from small fines to $109,700 penalties, disbarment referrals, and license suspensions
  • The ABA, state bars, and individual federal judges have each independently piled on with new rules, opinions, and standing orders — creating a patchwork of compliance landmines
  • Meanwhile, courts themselves use AI, make errors, and face zero institutional consequences
  • The result is a chilling effect that protects the legal profession's billable-hour economics — not the public it claims to serve

There is a certain kind of regulatory aggression that happens whenever a profession feels truly threatened. It does not present itself as aggression. It presents itself as ethics.

The legal profession is currently engaged in exactly this kind of aggression — aimed at artificial intelligence and at the lawyers foolish enough to adopt it without perfect execution. And the legal establishment is pulling it off with remarkable confidence, because it controls the rules, the referees, and the narrative.

This is not about protecting clients. This is about protecting the cartel.

The Evidence File: A Two-Year Sanctions Surge

Let's start with the facts, because the facts are genuinely remarkable.

Since mid-2023, courts across the United States — and increasingly around the world — have been sanctioning attorneys who submitted AI-generated briefs containing fabricated citations, invented case names, and nonexistent quotations. Damien Charlotin of HEC Paris Business School, who tracks these incidents in a publicly available database, has documented more than 1,330 cases as of April 2026. Approximately 800 come from U.S. courts alone. On any given day, it is not unusual for ten new cases from ten different courts to appear in his tracker.

The individual cases tell their own story of escalation:

  • Mata v. Avianca (S.D.N.Y., 2023): The case that started it all. Attorneys Steven Schwartz and Peter LoDuca submitted a brief, researched with ChatGPT, that contained at least six fictitious case citations. Judge P. Kevin Castel imposed a $5,000 sanction and called the attorneys' conduct an "unprecedented circumstance." Within months, it had become very, very precedented.
  • Kansas patent case (February 3, 2026): A federal judge fined attorneys for a patent holding company a combined $12,000 for filing documents with nonexistent quotations and case citations generated by AI.
  • Federal appeals court (February 18, 2026): A federal appeals court ordered a lawyer to pay $2,500 over AI hallucinations in a brief, expressing "frustration" that the problem "shows no sign of abating." Notably, the court's frustration was directed not at the tool but at lawyers' continued use of it without flawless verification.
  • Winery dispute (2025-2026): A district court found 15 fake citations and 8 invented quotations across several briefs, then imposed more than $15,000 in sanctions on lead counsel, plus adverse costs.
  • Oregon (2025): In what researchers believe is the largest single-attorney AI sanction to date, one attorney was hit with $109,700 in penalties.
  • Nebraska (April 9, 2026): The Nebraska Counsel for Discipline formally recommended temporary suspension of Omaha attorney W. Gregory Lake's license. Of the 63 citations in his briefing before the Nebraska Supreme Court, 57 were defective — including 20 AI hallucinations and 3 citations to cases that do not exist anywhere in legal history.
  • Georgia (April 2026): Georgia Supreme Court Chief Justice Nels Peterson publicly flagged that prosecutor Deborah Leslie's brief in a capital murder appeal contained at least five nonexistent case citations. Leslie initially denied using AI. She later admitted it. Her district attorney was forced to issue a formal apology to the state's highest court. Leslie now faces State Bar discipline.

The numbers, the frequency, and the severity all move in one direction: up. And running alongside this tsunami of sanctions is an equally aggressive wave of new rules.

The Rulebook Explosion

When courts and bar associations feel threatened, they do what they do best: they write new rules.

In July 2024, the American Bar Association issued Formal Opinion 512 — its first-ever ethics guidance on generative AI. The opinion runs the gamut of existing professional conduct rules: competence, confidentiality, communication, candor, supervisory duties, fees. It warns that lawyers using AI "must fully consider their applicable ethical obligations." It notes that using AI to pad hours could constitute fee fraud. It suggests that boilerplate engagement-letter consent won't be enough to cover AI data use. And it concludes, with the kind of majestic vagueness that bar opinions specialize in, that lawyers "must be vigilant" as technology evolves.

Formal Opinion 512 does not forbid AI use. But it threads so many compliance requirements around it — supervision obligations, confidentiality safeguards, competency duties, candor requirements — that it effectively tells lawyers: use AI at your peril, because every rule on the books now applies to everything you do with it.

State bars have done the same, in volume. North Carolina issued "Use of Artificial Intelligence in a Law Practice" (2024 Formal Ethics Opinion 1). California's bar ethics committee published guidance. New York's bar association weighed in with its own framework. Dozens of state-level opinions and guidance documents have proliferated since 2023, each adding its own layer of compliance requirements, risk factors, and warnings.

Courts have gone further still — right down to the individual judge. Since 2023, numerous federal judges have issued standing orders requiring lawyers to certify that no generative AI drafted a filing — or, if it did, that a licensed attorney has verified every word. The Northern District of Texas requires an AI statement on the first page of every AI-assisted filing. Missouri's 20th Judicial Circuit demands disclosure of the specific AI tool used. Washington's Clallam County District Court requires attorneys to certify the precise role AI played in any filing.

None of these rules are uniform. None are consistent. They vary by district, by judge, by state, and by court level. The result is a compliance patchwork so fragmented that keeping track of what AI rules apply in which court has itself become a billable legal service. One law firm — no doubt charging by the hour — published a reference guide to AI disclosure requirements across federal districts. Another sold a compliance checklist for state bar AI opinions. The irony practically writes itself.

The Argument They're Making vs. The Argument They're Not Making

Defenders of these rules and sanctions offer a simple, internally coherent argument: AI systems hallucinate, lawyers have a duty of candor to courts, and submitting fabricated citations is a fraud on the judicial system. If attorneys use tools that generate false information and then fail to verify that information, they are violating foundational professional obligations. The sanctions are the natural consequence of that failure. End of story.

This argument has real merit as far as it goes. No serious person argues that attorneys should be permitted to submit unverified AI output to courts without consequences. Verification is a baseline professional obligation, regardless of the tool used. A lawyer who outsources research to an associate and then files the associate's work without review is equally responsible for errors. AI is not a special exception to that rule.

But the argument that the legal profession is making is not just "verify your work." It is something much larger and more consequential — and the legal establishment is extremely careful never to state it plainly. What it is actually saying, through the cumulative weight of its sanctions, opinions, standing orders, and public shaming, is this:

Using AI in legal practice is presumptively risky, professionally dangerous, and ethically suspect — and any error you make with it will be punished more severely than equivalent errors made through traditional means.

That is a very different argument. And when you examine it honestly, it is not about ethics at all. It is about economics and power.

The Cartel Problem

The legal profession occupies a genuinely unusual position in the American economy. It is the only major service industry that regulates its own admission, disciplines its own practitioners, sets its own conduct standards, controls the adjudicative system through which its services are consumed, and — crucially — enjoys a state-enforced monopoly on the provision of legal advice. You cannot practice law without a license. You cannot get a license without law school. Law schools are accredited by the ABA. The ABA is dominated by lawyers. The circle is perfectly closed.

This is not a conspiracy. It is a structural reality. And structural realities have structural incentives. When a technology emerges that threatens to break open any element of that structure — to let clients draft their own briefs with AI assistance, to let pro se litigants access research quality that previously required a BigLaw associate, to let small firms compete with large ones on the quality of their legal research — the structure has every incentive to treat that technology as dangerous.

Artificial intelligence is that technology. It is capable, in its current form and especially in future iterations, of democratizing legal access in ways that would fundamentally disrupt the profession's billable-hour economics. It can produce a first-draft brief in minutes that would take a junior associate hours. It can analyze thousands of cases in seconds that would take a research team days. It can answer basic legal questions with accuracy that would previously have required a consultation fee.

The legal profession cannot openly oppose this democratization. That would be too nakedly self-interested. Instead, it opposes it on ethical grounds — through sanctions regimes, bar opinions, and standing orders that treat AI use as inherently dangerous, compliance-intensive, and professionally perilous.

The mechanism is not a ban. It is a climate of fear.

The Double Standard at the Center

If the legal establishment's AI enforcement were truly about protecting clients and courts, we would expect to see something like proportional application — consequences for AI errors calibrated against consequences for equivalent non-AI errors, and institutional accountability that extends beyond attorneys to the courts themselves.

We see neither.

On the proportionality question: attorneys make citation errors, legal research errors, and factual misrepresentations in filings every day using traditional tools. Westlaw returns bad search results. Associates miss controlling precedent. Briefs mischaracterize holdings. These errors are common, occasionally sanctionable, and generally treated as human fallibility addressed through ordinary professional discipline.

AI-generated errors are treated as something categorically different — an existential threat to the integrity of the judicial system requiring escalating punishment, public shaming, license referrals, and record-setting financial sanctions. In Oregon, an attorney was fined $109,700 for AI errors. No attorney in recent memory has been fined $109,700 for an equivalent volume of conventional research errors. The asymmetry is not a coincidence. It is a message.

On institutional accountability: in Georgia, Chief Justice Nels Peterson — while publicly castigating prosecutor Deborah Leslie for her AI-generated fake citations — acknowledged in the very same address that the judiciary itself must "keep up with AI" because it poses both "risk and opportunity for the judicial system." Courts are using AI. Federal judges have used AI in drafting opinions. Errors have emerged. Zero judges have been sanctioned, disciplined, or publicly shamed for those errors. Zero standing orders require judges to certify that their AI-assisted opinions contain no hallucinated citations.

The rules apply to practitioners. The institutions that make the rules are exempt from them.

What the Culture of Fear Actually Produces

Perhaps the most damning evidence against the "this is about ethics" framing is what the legal profession's AI enforcement regime actually produces in practice.

It does not produce careful AI adoption. It produces concealment.

Greg Lake, standing before the Nebraska Supreme Court, looked the justices in the eye and answered "No, I did not" when asked whether he had used AI — after submitting a brief with 20 AI hallucinations and 3 fully fabricated cases. Deborah Leslie initially told the Georgia Supreme Court that her filing had been "altered" rather than admitting she had used AI to draft it. These are not aberrations. They are rational responses to an environment in which AI disclosure is treated as a professional death sentence.

When the profession creates a climate in which honesty about AI use leads to license suspension, and in which even the possibility of AI use triggers career-threatening discipline, it creates strong incentives for exactly the kind of dishonesty that undermines judicial integrity. Then it uses that dishonesty as further evidence that AI cannot be trusted and practitioners using it must be punished more harshly. It is a closed loop that serves no one except the enforcers.

What would actually reduce harm would be clear, uniform standards that treat AI as a tool requiring verification — nothing more and nothing less. A framework that asks whether the attorney exercised reasonable professional judgment in using and reviewing AI output, rather than whether they used AI at all. Standards calibrated to actual harm rather than to institutional anxiety about technological disruption.
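To make that concrete: the verification such a framework would require is already mechanically tractable. Below is a minimal sketch in Python, assuming a public case-law lookup service along the lines of CourtListener's citation-lookup API. The endpoint URL, request format, and response fields shown here are assumptions to confirm against current documentation; a flagged citation is a prompt for human review, not an automated verdict.

    import requests

    # Minimal sketch: flag citations in a draft brief that resolve to no
    # known case in a public database. The URL and response shape below
    # follow CourtListener's citation-lookup API as best understood at the
    # time of writing -- treat both as assumptions and verify against the
    # current documentation before relying on this.
    LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

    def flag_unverified_citations(brief_text: str) -> list[str]:
        """Return every citation in brief_text that matched no known case."""
        resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=60)
        resp.raise_for_status()
        suspect = []
        for hit in resp.json():
            # An empty "clusters" list means the parsed citation matched
            # nothing in the database: a candidate hallucination that a
            # human must still check by hand before filing.
            if not hit.get("clusters"):
                suspect.append(hit.get("citation", "<unparsed>"))
        return suspect

    if __name__ == "__main__":
        with open("draft_brief.txt", encoding="utf-8") as f:
            for citation in flag_unverified_citations(f.read()):
                print(f"NOT FOUND, verify by hand: {citation}")

A tool of roughly this shape makes the baseline duty (confirming that cited cases exist) cheap to discharge, which is exactly why a single clear standard would serve the public better than hundreds of bespoke certification orders.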

Instead, we have a patchwork of 200-plus judicial standing orders, dozens of state bar opinions, and an ABA framework dense with compliance requirements — all of which have produced 1,330 documented sanctions cases and counting, while doing essentially nothing to reduce the underlying error rate.

The Access to Justice Casualty

There is a real victim in all of this, and it is not the legal profession.

Jason Regan — Greg Lake's client in the Nebraska case — is a father who hired a lawyer to help him in a custody dispute. His appeal was dismissed. He owes $52,000 in opposing counsel fees. He told reporters he is "exhausted and frustrated with the legal system" and may not be able to afford to pursue a malpractice claim. The institutional machinery that is now focused on punishing W. Gregory Lake has not offered Regan any remedy.

Legal AI, for all its current imperfections, has genuine potential to improve access to justice for people like Jason Regan. It can reduce the cost of legal services. It can make competent legal research available to people who can't afford BigLaw. It can give pro se litigants a fighting chance in courts where represented parties currently have an overwhelming advantage. These are not hypothetical benefits. They are already emerging, imperfectly, in legal tech tools available to ordinary people today.

The legal establishment's aggressive AI enforcement regime slows this development. It sends the message that AI in legal contexts is too dangerous to adopt, too compliance-intensive to use without extensive safeguards, and too legally risky to trust. Sophisticated, well-resourced law firms can afford to implement those safeguards. Solo practitioners and small firms — the ones most likely to serve ordinary clients who need affordable legal help — cannot. The enforcement regime protects large players and burdens the small ones, which is exactly what you would expect from a regulatory regime designed by the large players.

The Question They Won't Answer

No bar ethics committee and no federal judge issuing AI standing orders has publicly answered the following question:

If AI tools become accurate enough to produce legal research with a hallucination rate lower than that of junior associates, will you revise your enforcement posture — or will you still treat AI use as inherently suspect?

The silence is instructive, because an honest answer would reveal that the real objection is not to AI errors — AI systems are improving rapidly, and hallucination rates are declining. The real objection is to the disruption itself: to the possibility that legal work might become cheaper, faster, and more accessible in ways that undermine the structures the profession has built over two centuries.

Wrapping that objection in the language of ethics is a very old legal trick. Call it malpractice. Call it misconduct. Call it a threat to the integrity of the courts. The labels keep changing. The interest being protected does not.

What Would Genuine Reform Look Like

None of this means that AI use in legal practice should be unregulated or consequence-free. It means the regulation should be honest about what it is for and calibrated to serve the public rather than the profession.

Genuine reform would look like this:

  • A single national standard for verifying AI-assisted legal filings — clear, simple, and uniform — rather than 200-plus standing orders that vary by judge and treat compliance complexity as a feature, not a bug.
  • Sanctions calibrated to actual harm and proportional to those imposed for equivalent non-AI errors.
  • Judicial accountability provisions that hold AI-assisted judicial opinions to the same verification standards attorneys face for their AI-assisted briefs.
  • Safe-harbor provisions for attorneys who disclose AI use, verify citations, and document their review process.
  • Investment in AI literacy through continuing legal education rather than through the blunt instrument of sanctions.

And — perhaps most importantly — honest acknowledgment from bar associations and courts that their institutional interests are not identical to the public interest, and that their AI enforcement posture deserves the same skeptical scrutiny they would apply to any other regulated industry that uses ethics language to limit competition.

The legal profession is not the first regulated industry to deploy ethics rules as a competitive moat. It will not be the last. But it is arguably the most dangerous one to permit this — because it controls the courts where every other challenge to this kind of protectionism must ultimately be heard.

The gavel is in their hand. The gate is theirs to open or close. And right now, they are choosing to close it.


Sources: Reuters (February 3 and 18, 2026), Damien Charlotin AI Hallucination Cases Database (damiencharlotin.com), WOWT Omaha, Nebraska Public Media, FOX 5 Atlanta, NPR, Georgia Public Broadcasting, ABA Formal Opinion 512 (July 2024), Vermont Law Review, LegalSoul, American Bar Association Litigation News (March 2025), Sterne Kessler Goldstein Fox (January 2026).