Independent Legal Ethics Journalism
April 11, 2026

"Found Incompetent to Practice Law": How the AI Sanction Wave Is Using the Bar's Ultimate Weapon
Quick Facts

  • The Attorney: Franklin Hollis Eaton Jr., admitted to the Alabama State Bar, representing plaintiffs in a federal bench trial that spanned years
  • The Sanction: Approximately $55,000 in combined sanctions and adverse costs — the latest in a wave of career-ending AI penalties that totaled $145,000 in Q1 2026 alone
  • The Finding: A federal magistrate judge recommended that Eaton be found "incompetent to practice law" and referred his case to the Alabama State Bar "for any disposition it deems appropriate"
  • The AI Connection: Eaton submitted court filings containing citations to cases that "either do not exist," that led to "wrong case names," or that "did not relate to the issues at hand" — the same hallucination pattern seen in more than 800 U.S. cases and counting
  • The Scale: Researcher Damien Charlotin of HEC Paris documents a worldwide tally of AI hallucination sanctions; in April 2026 alone, there were "10 cases from 10 different courts on a single day"
  • The Broader Context: The legal profession's response to AI adoption — exponentially escalating sanctions, bar referrals, public reprimands, and now incompetency findings — is accelerating as AI tools become more embedded in legal practice, not less
  • Sources: U.S. District Court opinion (Apr. 2026); The Volokh Conspiracy/Reason.com (Apr. 6, 2026); OPB/NPR (Apr. 3, 2026); Damien Charlotin AI Hallucinations Tracker (Apr. 2026)

The opinion begins quietly, the way most federal judicial opinions do — with a recitation of procedural history, a list of prior rulings, a summary of the conduct at issue. But by the time a federal magistrate judge in Alabama finished writing his sanctions opinion against attorney Franklin Hollis Eaton Jr., he had done something that most judges are reluctant to do: he said, in plain English, that an attorney who had been practicing law for years was no longer competent to do so.

"The Court concludes at this point that Mr. Eaton demonstrates an unwillingness or inability to meet the minimum standard of competence to practice law," the magistrate wrote. "He has demonstrated incompetence throughout these proceedings, which have ultimately culminated in numerous misrepresentations of the law."

The case was referred to the Alabama State Bar "for any disposition it deems appropriate." The sanctions imposed ran to approximately $55,000. And the trigger for this extraordinary finding — a federal court recommending that a licensed attorney be declared incompetent — was, at least in part, artificial intelligence hallucinations embedded in court filings.

Eaton's case is not, by itself, unprecedented. It is, in fact, representative. It represents the direction that attorney discipline in the age of AI is traveling: from fines, to public reprimands, to disqualification, to suspension — and now to a federal court formally questioning whether a practicing attorney should be permitted to continue practicing at all.

The question that the legal profession's disciplinary apparatus has not answered — and shows no sign of engaging with — is whether this escalation is a legitimate response to a genuine professional crisis, or whether it is the most effective institutional gatekeeping mechanism the bar has ever constructed.

The Cumulation of Everything

To understand the Eaton case, it is necessary to understand what the magistrate judge meant when he used the word "cumulation." The sanctions opinion is not simply about AI hallucinations. It is about a pattern of professional conduct that the court observed over years of litigation — a pattern in which AI-generated fake citations were the final, unignorable data point in a much larger record.

The court recited that record in detail. Eaton failed to include a jury demand in his complaints, then failed to follow the court's instructions on how to correct the omission — converting a case his clients presumably wanted tried to a jury into a bench trial. His co-counsel repeatedly sought to withdraw; correspondence in the record showed that Eaton filed motions over their objections and placed their signatures on documents without consent. He failed to engage with opposing counsel in drafting joint pretrial documents as required by the court's standing order.

At trial — a bench trial, because of the jury demand failure — Eaton estimated his case would take two days to present. It took twelve. The court found him "woefully unprepared," characterized his witness examinations as "plodding, rambling, unfocused," and noted that he appeared to call witnesses simply because they were present in the gallery rather than because they served a coherent litigation strategy.

Then came the briefs. On July 15, 2025, Eaton filed a response in opposition to a motion for directed verdict. Defense counsel noticed problems and moved quickly: several of the cited cases either did not exist, led to the wrong case names, or bore no relation to the legal issues for which they were cited. The court conducted its own independent review and confirmed every allegation — then found additional problems that defense counsel had not identified.

"The insertion of bogus citations and misrepresentation of authorities is not a mere typographical error, nor the subject of reasonable debate," the court wrote. "It is just wrong."

What followed was a familiar pattern: show cause orders, hearings, responses in which the attorney failed to fully account for all identified errors, continuance requests, health claims. The court observed the same evasion that courts across the country have documented in AI misconduct cases — the partial acknowledgment, the buried corrections, the attribution of problems to external circumstances rather than the attorney's own professional failures.

In the end, the court imposed approximately $55,000 in sanctions and recommended the Alabama State Bar find Eaton incompetent to practice law.

The Incompetency Standard: What the Courts Are Building

The magistrate judge's incompetency finding is, in one sense, consistent with the established legal standard. Rule 1.1 of the ABA Model Rules of Professional Conduct requires that a lawyer provide competent representation, which includes "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." The disciplinary rules of every state bar contain equivalent provisions. A finding that an attorney submitted fabricated legal citations — citations to cases that do not exist — is, on its face, a finding that the attorney failed to provide competent representation.

But the incompetency finding in the Eaton case goes beyond a Rule 1.1 violation. The magistrate is not simply saying that Eaton's representation was below standard in this matter. He is saying that Eaton lacks the capacity — the "unwillingness or inability" — to meet the minimum standard of competence to practice law at all. That is a finding about the attorney's fundamental professional fitness, not about a specific lapse.

This distinction matters enormously for how the legal profession responds to the AI adoption wave. If AI hallucinations are treated as evidence of fundamental incompetence — rather than as a category of technical error requiring correction and guidance — then the disciplinary response to AI adoption is not reform, it is elimination. Attorneys who use AI and make mistakes are not being taught to use AI better; they are being removed from the profession.

The escalation is documented and accelerating. In Q1 2026, courts imposed at least $145,000 in sanctions against attorneys for AI-related errors — a record. The largest single penalty in U.S. history for AI misconduct, $109,700 against an Oregon attorney, was issued in March 2026. The Eaton case, at $55,000, represents the second-largest penalty in recent months. The $47,000 sanction against Joshua Watkins and Burrill Watkins LLC — covered separately — pushed the running total higher still.

Researcher Damien Charlotin of HEC Paris, who maintains a worldwide tally of AI hallucination sanctions, reported in early April 2026 that the number of cases is still increasing. More than 1,200 incidents have been documented worldwide; approximately 800 are from U.S. courts. On a single day in April 2026, Charlotin counted ten sanctioned AI hallucination cases from ten different courts.

"We have this issue because AI is just too good — but not perfect," Charlotin told NPR's Martin Kaste.

That framing — AI too good to ignore, but not perfect enough to trust — captures exactly the dilemma that the legal profession's disciplinary apparatus is treating as a weapon rather than as a problem to be solved.

The Billable Hours Problem That Courts Won't Name

The legal profession's response to AI adoption is, at its surface, about accuracy and competence. Courts and bar authorities cite Rule 1.1, cite Rule 3.3 (candor toward the tribunal), cite Rule 8.4 (misconduct) — a disciplinary framework built over decades for a profession that did not yet have to contend with generative AI. The ethical rules, they say, have not changed. The obligation to verify the accuracy of citations existed before ChatGPT. It continues to exist now.

This framing is technically accurate and substantively misleading. The ethical rules have not changed, but the enforcement has. Courts that treated AI citation errors as matters for corrective orders and modest sanctions in 2023 are now treating them as grounds for disqualification, incompetency findings, and career-ending bar referrals in 2026. The same conduct — submitting a filing with fabricated citations — has attracted penalties that have increased by orders of magnitude in three years.

The reason for this escalation is not simply that courts have grown more impatient. It is that AI has grown more capable — and, as it has grown more capable, it has grown more threatening to the economic model on which the legal profession is built.

Joe Patrice, senior editor at Above the Law, was direct about this dynamic in an interview with NPR in April 2026: AI tools are "being forced into almost all the software that lawyers use," and the billable hours model is genuinely at risk. "There are two options," Patrice said. "The lawyers can agree to take less — pause for laughter — or they can start finding a new way to bill."

The sanctioning regime is the legal profession's answer to that dilemma. By making AI use professionally dangerous — by converting every AI error into potential career destruction rather than a correctable technical mistake — the disciplinary apparatus creates a chilling effect that preserves the traditional billing model. Attorneys who cannot safely use the tools that would make their work faster and cheaper will continue to bill hours for the slower, more expensive work that AI could perform at a fraction of the cost.

This is not a conspiracy; it does not require coordination or intent. It is an emergent property of institutional self-preservation. The people writing the opinions — the judges imposing the sanctions, the bar disciplinarians reviewing the referrals — genuinely believe they are protecting the integrity of the judicial process. They are also, whatever their intentions, constructing a system that makes AI adoption in legal practice extraordinarily costly for the attorneys who most need it.

The Attorneys Who Most Need AI Are the Ones Most Likely to Be Sanctioned

Eaton is not a BigLaw partner with forty associates and a compliance department. He is not a partner at a regional firm with institutional resources, AI governance policies, and a dedicated legal technology team. He is an individual attorney — the kind of practitioner who makes up the overwhelming majority of American lawyers — representing clients in complex federal litigation with the tools available to solo and small-firm practitioners.

The attorneys most likely to be sanctioned for AI hallucination errors are not the attorneys whose employers have already solved the AI problem with enterprise contracts, verification workflows, and mandatory training. They are the attorneys practicing alone or in small firms, handling the kinds of cases — employment discrimination, personal injury, immigration, criminal defense — that cannot command the rates that fund institutional AI compliance infrastructure.

The Oregon attorney who received the $109,700 sanction in March 2026 was not at a major firm. The attorney in Eaton's case is a solo practitioner working federal trial litigation. Joshua Watkins, whose case was decided the same week as Eaton's, was a partner at a small firm — one whose AI governance policies, the court found, were essentially nonexistent.

The pattern is consistent: the lawyers facing career-ending sanctions for AI hallucinations are disproportionately the lawyers practicing in the spaces where clients cannot afford to pay for the human hours that AI could replace. This is the access-to-justice irony at the heart of the sanction wave. The tool that could make legal representation affordable is most dangerous — in disciplinary terms — for the lawyers representing the clients who most need it to be affordable.

The "Mysterious Illness" Defense and the Bar's Answer

The Eaton opinion contains one detail that is worth examining separately, because it illuminates the intersection of the AI problem with the deeper challenge of attorney mental health and substance abuse in the legal profession.

Throughout the litigation, Eaton repeatedly cited a "mysterious illness" as the explanation for his failures — his inability to demand a jury, his lack of trial preparation, his twelve-day presentation of a case he said would take two days, his inadequate compliance with discovery orders. The court was, by the end, skeptical: it concluded that Eaton was either "lying about his health situation or, if true, he is no longer capable of practicing trial law."

The legal profession's disciplinary apparatus is not well-designed for attorneys in professional distress. Lawyers' Assistance Programs — confidential referral services operated by state bars — exist in most jurisdictions, but they are voluntary, underutilized, and carry no enforcement mechanism. An attorney who is struggling — with workload, with health, with the cognitive demands of practice — and reaches for AI as a shortcut is not receiving a message from the disciplinary system that says "get help." The message is "you are incompetent, and we are removing you from the profession."

The bar has chosen the punitive response over the rehabilitative one. That choice is consistent with the broader pattern: use the disciplinary apparatus aggressively to signal that AI errors are career-ending, and in doing so, make AI adoption in legal practice a decision that only the most institutionally well-supported attorneys can safely make.

The Law Schools That Are Finally Responding — Maybe

There is one institutional actor in the legal profession that has begun to engage constructively with the AI problem: law schools. Carla Wale, the director of the Gallagher Law Library at the University of Washington School of Law, is developing optional AI ethics training for students — a program designed to equip future practitioners with the skills to use AI responsibly before they enter the profession and face the sanction regime.

The word "optional" is doing a great deal of work in that sentence. AI ethics training is optional. AI use in legal practice is effectively mandatory — not because the rules require it, but because the tools are "being forced into almost all the software that lawyers use," as Patrice observed. The mismatch between optional training and mandatory technological integration is, itself, a form of institutional negligence.

Law schools that graduate students without comprehensive AI competency training are sending practitioners into a disciplinary environment that treats AI errors as evidence of fundamental incompetence — without giving those practitioners the skills to avoid those errors. The result is predictable: more Eatons, more Watkinses, more Brigandis. More careers ended for errors that could have been prevented by the training that the institutions charged with professional preparation chose not to require.

The Incompetency Finding and What It Signals

The magistrate judge's recommendation that Eaton be found "incompetent to practice law" is, viewed in isolation, a reasonable response to the conduct described in the opinion. Fabricated citations. Misrepresentations of law. Trial preparation so inadequate that a two-day case became a twelve-day ordeal. Repeated failures to follow court orders and roadmaps. By any standard, the record is disturbing.

But viewed in the context of the broader AI sanction wave, the incompetency finding signals something more significant than a judgment about one attorney's fitness to practice. It signals that the legal profession's disciplinary apparatus is prepared to use the vocabulary of incompetence — of fundamental professional disqualification — to address AI adoption errors. That vocabulary, once deployed, is not easily contained.

If AI hallucinations are evidence of incompetence rather than a category of correctable technical error, then the attorneys in the roughly 800 U.S. cases already sanctioned for AI misconduct are potential subjects of competency proceedings. If ten courts sanctioned AI hallucination cases on a single day in April 2026, and if each of those cases generates a bar referral, the state disciplinary apparatus across the country is about to receive a flood of incompetency referrals that it has neither the resources nor the framework to process.

The legal profession has chosen to respond to the AI adoption wave with the most powerful tools in its disciplinary arsenal. The consequences of that choice — for individual attorneys, for clients who depend on them, and for the access-to-justice promise that the profession is nominally committed to — are only beginning to become visible.

Eaton's case will be heard by the Alabama State Bar. The bar will impose whatever discipline "it deems appropriate." And somewhere in Alabama, in Oregon, in Connecticut, in Georgia, in every state where courts are sanctioning attorneys for AI misconduct, other lawyers are watching the outcomes and recalibrating their risk calculus.

The message the sanctions wave is sending is not "use AI responsibly." It is "use AI at your peril." Whether that is the message a profession committed to access to justice should be sending is a question the bar has so far declined to ask.


Sources and Citations

  • The Volokh Conspiracy / Reason.com. (Apr. 6, 2026). "$55K Sanctions Related in Part to AI-Hallucination-Filled Court Filings." reason.com
  • OPB / NPR. (Apr. 3, 2026). "Penalties stack up as AI spreads through the legal system." opb.org
  • Charlotin, D. (2026). AI Hallucinations in Legal Proceedings — Worldwide Tracker. damiencharlotin.com
  • ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures." complexdiscovery.com
  • Above the Law / Patrice, J. (Mar. 2026). "AI Won't Replace Lawyers But Can Create Critical Shortage of Good Ones." abovethelaw.com