Quick Facts
- The Case: Maldonado v. Professional Animal Retirement Center d/b/a Black Pine Animal Sanctuary, N.D. Indiana, dismissed April 1, 2026
- The Attorney: Roger Roots, Rhode Island-based attorney representing Joe Exotic (Joseph Maldonado), the Tiger King himself — currently serving a federal prison sentence for animal abuse and attempted murder
- The Sanction: $1,500 monetary penalty plus referral to Rhode Island disciplinary authorities — a career-threatening bar discipline referral triggered by AI-generated hallucinations in the complaint and briefing
- The Errors: Court found "opinions and citations provided were woefully misrepresented or else nonexistent" — the signature of AI hallucination across hundreds of U.S. cases
- The Excuses: Roots blamed a "medical emergency" that resulted in reliance on a paralegal's work; court found his explanations unconvincing and internally contradictory
- The Pattern: Q1 2026 has produced over $145,000 in AI-related attorney sanctions across U.S. courts; Nebraska attorney Greg Lake faces suspension after 57 of 63 citations in a Supreme Court brief were erroneous; the AI sanction wave continues to accelerate
- The Gatekeeping Angle: The same Indiana court that sanctioned Roots for AI-assisted briefing belongs to a federal judiciary in which 61.6% of judges use AI tools in their own work — without disclosure, without sanctions, without accountability
- Sources: Reason.com/Volokh Conspiracy (Apr. 12, 2026); Maldonado v. PARC opinion (Apr. 1, 2026); ComplexDiscovery (Apr. 9, 2026); WOWT Nebraska (Apr. 9, 2026); Northwestern University/Sedona Conference Journal (Mar. 2026)
The court released its opinion on April 1, 2026. The choice of date was intentional — the judge said so. A case involving a federally incarcerated former reality-TV star known as the Tiger King, four retired circus tigers housed at an Indiana wildlife sanctuary, and an attorney who filed court documents riddled with hallucinated legal citations wound up being resolved on April Fools' Day, and the court couldn't resist noting the timing.
"Releasing this opinion on April 1 was a nice touch," the court wrote.
But the joke, as always when the legal profession encounters AI, has a target — and the target is not Joe Exotic. The target is Roger Roots, the Rhode Island attorney who agreed to represent Joseph Maldonado, aka the Tiger King, in his Endangered Species Act lawsuit against the Black Pine Animal Sanctuary. Roots walked into a federal courtroom with a complaint and supporting briefs that contained, in the court's words, citations and authorities that were "woefully misrepresented or else nonexistent."
The sanction imposed was $1,500 — modest by the standards of the 2026 AI discipline wave, which has produced penalties of $109,700 against a single attorney in Oregon and a suspension recommendation against a Nebraska lawyer whose Supreme Court brief contained errors in 57 of 63 citations. But the $1,500 was accompanied by something potentially far more consequential: a referral to the Rhode Island disciplinary authorities. In the legal profession's sanction architecture, that referral is where careers end.
And yet, as the Tiger King case reveals with its mix of absurdity and institutional earnestness, the sanctions being imposed on attorneys for AI-related errors are not calibrated to fix the hallucination problem. They are calibrated to fix the profession's AI problem — which is a different thing entirely.
The Case: Joe Exotic, Four Tigers, and a Complaint Full of Fake Citations
The underlying lawsuit is straightforwardly strange. Joseph Maldonado — convicted in 2019 of attempting to hire a hitman to murder animal rights activist Carole Baskin, and serving a 21-year federal prison sentence — decided to sue the Black Pine Animal Sanctuary in Indiana for allegedly mistreating four of his former tigers. The tigers had been transferred to Black Pine when Maldonado's wildlife park was shut down following his conviction.
Maldonado's legal theory was that Black Pine had violated the Endangered Species Act's prohibition on "taking" endangered species by having the tigers spayed and neutered, forcing them into public observation, and housing them in inadequate enclosures. He sought to bring a citizen suit under the ESA's enforcement provision.
The problem — beyond the general strangeness of a federally imprisoned zoo operator suing a wildlife sanctuary for the welfare of tigers he can no longer visit — was standing. Article III of the Constitution requires that a plaintiff have a concrete, particularized, actual or imminent injury to maintain a lawsuit in federal court. Maldonado's connection to the four tigers was historical and emotional, not current and concrete. He had not visited them. He could not visit them. His release date was decades away. He had sent unnamed "agents" to the sanctuary on his behalf, but the court found this insufficient to establish the kind of concrete injury that the standing doctrine requires.
"The only thing clear at this point is that Maldonado has strong feelings about these cats — but those strong feelings and his hope to work with them in the future are not enough to give this Court subject matter jurisdiction over his claims," the court wrote in dismissing the case.
So far, a routine Article III standing dismissal. Then came the sanctions portion of the opinion, which revealed the second story running beneath the Tiger King sequel: Roger Roots had filed documents containing imaginary legal citations, and the court was not amused.
The Hallucinations: What the Court Found
Courts have now seen AI-generated hallucinations in filings often enough — over 800 documented U.S. cases, according to researcher Damien Charlotin of HEC Paris — that the pattern is immediately recognizable to any federal judge who pays attention to legal technology news. The citations look plausible. They sound authoritative. They simply do not exist — or exist but say something entirely different from what the brief claims they say.
In Roots's case, the court issued a Show Cause Order in February 2026, nearly six months after the complaint was filed in August 2025, directing Roots to explain the inaccuracies and legal misrepresentations that had appeared in the complaint and the subsequent briefing. The court found "opinions and citations provided were woefully misrepresented or else nonexistent."
Roots responded in March 2026. He accepted "some responsibility" but emphasized that the errors were not made in bad faith. His explanation invoked a "medical emergency" that had forced him to rely on a paralegal's work product rather than conducting his own research. He implied, without quite saying, that the paralegal — not he — had been using AI, and that the filing of the extended, error-riddled brief had been an inadvertent mistake.
The court found the explanation unconvincing and internally contradictory. The heart of the problem was timeline and behavior: Roots had filed a lengthy brief — pages 26 through 37 of which violated Local Rule 7-1(e)(1) governing brief length. When opposing counsel filed a reply noting the local rule violation, Roots did not notify the court of any mistake. Instead, he defended the length of the brief, arguing in a surreply that "[t]he complexity and importance of the issues here — including questions of ESA standing, jurisdiction, and citizen-suit enforcement" justified the extended treatment.
The court laid out the logical contradiction with precision: if Roots intended to file only a short brief, as he later claimed, he should have recognized and corrected the error when opposing counsel flagged the length violation. Instead, he doubled down. "So, which is the Court to believe," the opinion asked: "that the extraordinarily long brief was intentional and should be considered despite its violation of the Local Rules, or that the same brief, riddled with errors, was inadvertently filed instead of a shortened, seemingly more correct brief?"
The court's answer: neither version of the story is fully credible, and the behavior pattern — filing documents with fabricated citations, then defending those documents, then belatedly retreating to an inadvertence defense — is one it has seen before.
"It is abundantly clear," the court concluded, "that Roots did not make the requisite reasonable inquiry into the law in crafting both the Complaint and the response to PARC's Motion to Dismiss. Had he done his due diligence for either filing, he would have discovered that the opinions and citations he provided were woefully misrepresented or else nonexistent. Whether these incorrect filings are the work of generative AI or counsel's own sloppiness, the resulting errors and legal misrepresentations are glaring."
$1,500. And a referral to Rhode Island.
The Architecture of the AI Sanction System: Why a $1,500 Fine Is the Least of Roger Roots's Problems
To understand the significance of the Roots sanction, it is necessary to understand what a bar disciplinary referral actually does in the professional lives of attorneys.
The monetary sanction — $1,500 — is the least consequential part of what happened to Roger Roots in the Indiana courtroom. Fifteen hundred dollars is expensive but not catastrophic for an attorney with an active practice. It is a fine, not a financial body blow.
The referral to the Rhode Island disciplinary authorities is different in kind. It triggers an independent investigation — an investigation over which Roots has no control, which proceeds on its own timeline, which can result in outcomes ranging from a private admonition to a public reprimand to a suspension to disbarment. It places a permanent notation in the professional disciplinary record that follows an attorney for the remainder of their career. Every client background check, every bar admission application in another state, every judicial appointment process — all of them will surface the disciplinary referral.
The legal profession's disciplinary architecture is designed to take the most serious consequences of any AI-related incident out of the sanctioning court's hands and deposit them with a separate institution — the state bar — that operates without the time pressure, the case-specific focus, or the remedial orientation that constrains a trial court. A district court judge imposing sanctions is thinking about the case before her. The Rhode Island disciplinary authorities, reviewing Roots's referral, will think about his fitness to practice law generally.
This is not incidental to the design of the AI sanction system. It is the system's most powerful feature: a modest monetary sanction that would not deter most attorneys becomes, through the referral mechanism, the opening of a potentially career-ending parallel proceeding that the attorney cannot control, cannot predict, and cannot contain by good performance in the original case.
Roger Roots filed a complaint with bad citations. The case was dismissed on standing grounds that had nothing to do with the bad citations. The client — who is in federal prison and has limited ability to be harmed further by the dismissal — received no compensation from the sanction. But Roots now faces a Rhode Island disciplinary investigation that could end his ability to practice law in the state where he is licensed.
This is not proportionate. It is architectural. It is how the system is designed to work.
The Wave Context: Nebraska, Oregon, Alabama, and the Q1 2026 Enforcement Surge
The Roots sanction does not exist in isolation. It is the latest in a Q1 2026 enforcement surge that has produced more AI-related attorney sanctions than any previous three-month period in American legal history.
In Oregon, U.S. Magistrate Judge Mark Clarke imposed $109,700 in combined sanctions and adverse costs against a single attorney — the largest AI-related penalty in U.S. history. The case involved a family dispute over a vineyard and winery, fifteen fabricated citations, and eight invented quotations, all processed through the per-infraction formula that Oregon courts established in December 2025: $500 per fabricated citation, $1,000 per fabricated quotation.
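As a rough illustration of how the per-infraction formula tallies, the arithmetic can be sketched in a few lines. The rates and counts come from the article; the split between the per-infraction base and the adverse-cost award that brings the total to $109,700 is an assumption for the sketch, not a figure taken from the opinion.

```python
# Sketch of the per-infraction sanction formula the article attributes to
# Oregon courts (December 2025): $500 per fabricated citation and
# $1,000 per fabricated quotation. The remainder of the $109,700 total is
# assumed here to be adverse costs; the opinion's exact breakdown is not
# given in the article.

RATE_PER_FAKE_CITATION = 500     # dollars per fabricated citation
RATE_PER_FAKE_QUOTATION = 1_000  # dollars per fabricated quotation

def per_infraction_base(fake_citations: int, fake_quotations: int) -> int:
    """Return the per-infraction sanction base in dollars."""
    return (fake_citations * RATE_PER_FAKE_CITATION
            + fake_quotations * RATE_PER_FAKE_QUOTATION)

# The Oregon case as described: 15 fabricated citations, 8 invented quotations.
base = per_infraction_base(15, 8)
print(base)  # -> 15500
```

On these assumed inputs, the formula itself accounts for $15,500 of the penalty, which underscores the article's point: the bulk of the $109,700 figure lies in adverse costs layered on top of the per-infraction base.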
In Alabama, a federal magistrate judge recommended that attorney Franklin Hollis Eaton Jr. be found "incompetent to practice law" after AI-hallucinated citations became the final entry in a years-long record of professional failures, and referred the matter to the Alabama State Bar. In a separate Alabama case, attorney Joshua B. Watkins and his law firm Burrill Watkins LLC were ordered to distribute a detailed public reprimand order to every client, every opposing counsel, and every presiding judge in every pending case they handle — an unprecedented mass-notification sanction that weaponized the attorney as the instrument of his own professional destruction.
In Nebraska, the situation as of April 9, 2026, is still unfolding. The Nebraska Counsel for Discipline has moved for the temporary suspension of Omaha attorney Greg Lake after his Supreme Court brief in a divorce case contained errors in 57 of 63 citations. Questioned by the justices on the record, Lake denied using AI, blaming a computer malfunction on his wedding anniversary that, he claimed, caused him to file the wrong version of the brief. The Nebraska Supreme Court found his explanation "lack[ed] credibility." Lake now awaits the court's ruling on whether to impose the suspension — while his client, Jason Regan, faces $17,000 in opposing counsel fees and $35,000 in his ex-wife's legal fees as a direct consequence of his attorney's AI-related misconduct.
The Q1 2026 total: at least $145,000 in sanctions, multiple bar referrals, a recommended incompetency finding, an unprecedented mass-notification order, and now the Tiger King attorney added to the roll.
Researcher Damien Charlotin of HEC Paris, who maintains the world's most comprehensive database of AI hallucination cases in legal proceedings, reported that in early April 2026, he counted ten courts flagging AI-fabricated filings on a single day. He has now documented over 1,200 cases worldwide, approximately 800 from U.S. courts. The pace is accelerating, not slowing.
The Institutional Paradox: What the Courts Are Not Saying About Their Own AI Use
Every sanction order in the Q1 2026 enforcement surge has one thing in common: none of them mention that federal judges are using AI in their judicial work at a documented rate of 61.6%, according to a Northwestern University survey published in the Sedona Conference Journal in March 2026.
The Northwestern survey found that 30% of responding federal judges use AI for legal research — the same activity that gets attorneys sanctioned. Twenty-two percent use AI daily or weekly. And 45.5% of judges who use AI received no training from their courts on how to do so.
No judge has been sanctioned for using AI. No judge has been required to disclose AI use. No judge has faced a bar referral because an AI tool produced an inaccurate summary that influenced a judicial outcome. The accountability infrastructure that the legal profession has constructed around AI use — the disclosure requirements, the certification obligations, the sanction formulas, the bar referral mechanisms — applies exclusively to the practitioners who appear before the courts, not to the courts themselves.
This is not a coincidence. It is the architecture of institutional power.
At the IAPP Global Summit on April 7, 2026, Chief Judge James Boasberg of the U.S. District Court for the District of Columbia — a member of the same federal judiciary whose courts issued the AI sanctions that produced $145,000 in Q1 penalties — floated the idea that AI might be capable of rendering judicial decisions in administrative hearings. "If AI were 95% accurate, would people come to say, 'I'd rather take AI at 95% than wait years for a judge?'" Boasberg asked. Then he added: "I'd feel lucky if I were 95% accurate."
A judge who acknowledges that human judicial accuracy may be below 95% is simultaneously presiding over a system that imposes career-ending consequences on attorneys whose AI-assisted work contains errors. The logical gap in this position is not subtle: if judges are willing to accept AI at 95% accuracy for judicial decision-making, why are they imposing sanctions against attorneys whose AI tools produce errors at rates that are empirically similar to the human error rate in judicial work?
The answer, which the courts have never explicitly articulated but which their behavior makes plain, is that the issue is not error rates. It is control.
The Decline Context: What AI Actually Threatens
The legal profession's aggressive AI sanction campaign makes more sense when viewed against the backdrop of a profession in structural decline. Law school applications peaked in 2004 and have never fully recovered. Legal aid budgets have been cut repeatedly. Mid-size firm profitability has stagnated. The BigLaw partnership track has narrowed to a shrinking slice of a shrinking market. The legal services industry is consolidating, commoditizing, and — most threatening of all — automating.
AI does not threaten to take the most complex, judgment-intensive legal work: the Supreme Court argument, the bet-the-company trial, the novel constitutional challenge. What AI threatens to take is the volume work — the contract review, the due diligence, the legal research memoranda, the routine motions, the first drafts of everything. These are the tasks that have always been performed by junior associates, that have always been billed at hourly rates to clients who paid them, and that have always served as the training ground through which lawyers learned to do the more complex work.
If AI can perform this volume work faster and cheaper — and it can, and it is — then the profession's economic model is under genuine threat. Not because AI will replace experienced attorneys. Because AI will replace the work that justifies paying dozens of junior attorneys to produce billable hours, and the institutions built on that model — the large firms, the law schools, the bar associations — will face a revenue crisis.
Joe Patrice, senior editor at Above the Law, identified this dynamic directly in a 2026 interview with NPR: AI tools are "being forced into almost all the software that lawyers use," and the billable hours model is genuinely at risk. "There are two options," Patrice said. "The lawyers can agree to take less — pause for laughter — or they can start finding a new way to bill."
The sanction regime is not a way to find a new billing model. It is a way to slow the adoption of the technology that requires one. By making AI use professionally dangerous — by converting every AI error into a potential career-ending event — the disciplinary apparatus creates a chilling effect that preserves the traditional billing model for as long as the deterrence holds.
Roger Roots filed a complaint with bad citations for a client who cannot pay anyone to do anything because he is in federal prison for attempted murder. The sanction will not deter large law firms from using AI. It will not affect the compliance departments at Skadden or Sullivan & Cromwell. It will affect solo practitioners, small-firm attorneys, and lawyers who represent clients who cannot afford enterprise-tier AI governance infrastructure — exactly the practitioners whose adoption of AI would be most beneficial to the clients who most need affordable legal services.
The Paralegal Defense and the Real Accountability Question
One detail of the Roots case deserves special attention: the attorney's explanation that his errors were the result of relying on a paralegal's work during a medical emergency. The court found the explanation unconvincing, but the explanation raises a real question about how professional responsibility doctrine applies to AI adoption in legal practice.
Attorneys have always relied on the work of non-attorney staff: paralegals, legal assistants, law clerks, research assistants. The professional responsibility framework has always placed ultimate responsibility for the work product on the supervising attorney, regardless of who performed the underlying research or drafting. If a paralegal conducts defective legal research that the attorney fails to verify, the attorney is responsible for the defective filing.
The Sixth Circuit articulated this in its April 3, 2026 opinion in United States v. Farris: "Attorneys should not utilize technology without knowing the ways in which it can be misused or contribute to inaccuracies." Whether the flawed research came from a hallucinating AI or a hallucinating paralegal, the attorney who signed the filing bears the professional responsibility.
This principle is not wrong. But its application in the AI context raises questions about proportionality and purpose that the courts have consistently refused to engage. The standard being applied to AI-assisted work — verify every citation, certify every fact, demonstrate complete mastery of the tool's failure modes — is more demanding than the standard historically applied to attorney supervision of human research assistants. No attorney has ever been sanctioned for failing to re-run every Westlaw search their paralegal conducted to verify that the results were accurate. Yet attorneys are being sanctioned for failing to verify AI-generated research with equivalent thoroughness.
The differential standard reveals differential purpose. AI is being held to a higher standard than human legal support staff not because AI is less accurate — in many legal research tasks, it is more accurate — but because AI is threatening in a way that human legal support staff is not. Paralegals do not replace attorneys. AI can.
April 1 and the Self-Awareness the Court Almost Had
The Indiana court's choice to note the April 1 release date — "Releasing this opinion on April 1 was a nice touch" — is a small moment of judicial levity in a body of legal work that has been almost entirely devoid of self-awareness about what the AI sanction campaign actually represents.
The court's self-congratulation about the timing reveals an institution that finds the Tiger King case amusing — that sees in Joe Exotic's ESA lawsuit and Roger Roots's hallucinated citations a kind of legal absurdism worthy of an April Fools' Day reveal. The tigers, the prison bars, the celebrity, the fake citations: it makes for a good story.
What the court does not note, with equal self-awareness, is the other side of the April Fools' joke: that the same federal judiciary conducting this sanctioning is also using AI in its own work, without disclosure, without sanctions, and with no mechanism for accountability if the AI produces an error that affects an actual judicial outcome. That the system designed to protect the integrity of legal filings exempts the people who write the rulings from the obligations it imposes on the people who write the briefs.
Roger Roots made a mistake. The filing was bad. The citations were imaginary. The court was right to sanction him. But the sanction does not exist in a vacuum of pure ethics enforcement. It exists in an institutional context in which AI hallucinations committed by attorneys produce career-threatening referrals to disciplinary authorities, while AI tools used by judges produce nothing but more efficient judicial work product.
That asymmetry is not a nice touch. It is the point.
The Accelerating Pattern and Where It Leads
The Q1 2026 enforcement surge — $145,000 in sanctions, multiple bar referrals, a recommended incompetency finding, a mass-notification order, a suspension recommendation, and now the Tiger King attorney — represents the most concentrated period of AI-related attorney discipline in American legal history. And it is accelerating.
The ComplexDiscovery analysis that first aggregated the Q1 data predicted that Q2 2026 would produce higher numbers. The per-infraction formulas adopted from Oregon's December 2025 order are spreading to other jurisdictions. The Nebraska suspension, if imposed, will signal that bar discipline — not merely monetary sanctions — is now a standard consequence for AI-related filing errors. The judicial AI paradox exposed by the Northwestern survey will either produce a reckoning with the institutional double standard, or will deepen into an established framework in which the rules apply asymmetrically by design.
The technology is not going away. Attorney survey data shows AI adoption continuing despite the sanctions — the tools are simply too useful. The attorneys being sanctioned are not technological cowboys recklessly disregarding professional obligations. Most of them are overworked practitioners trying to serve clients in a system that has never adequately resourced the work they are expected to do. The AI represents a path to doing that work better, faster, and more affordably — and the profession's response has been to make the path as dangerous as possible.
Joe Exotic is still in prison. His tigers are still at Black Pine. The case was dismissed for lack of standing, which means the underlying question of whether the tigers are being mistreated — the question Maldonado claimed to care about — was never reached. Roger Roots is facing a Rhode Island disciplinary inquiry that could end his ability to practice law.
The tigers, as far as the legal system is concerned, are doing fine.
Sources and Citations
- The Volokh Conspiracy / Reason.com. (Apr. 12, 2026). "Tiger King Attorney Sanctioned for Filing Complaint with AI Hallucinations." reason.com
- Maldonado v. Professional Animal Retirement Center d/b/a Black Pine Animal Sanctuary, N.D. Indiana (Apr. 1, 2026). Opinion PDF
- ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures." complexdiscovery.com
- WOWT / 1011now. (Apr. 9, 2026). "Nebraska attorney faces suspension over alleged AI use in state Supreme Court brief." wowt.com
- IAPP. (Apr. 7, 2026). "US federal judges discuss the intersection of emerging technology, AI with the legal system." iapp.org
- Northwestern University / Sedona Conference Journal. (Mar. 2026). Survey of Federal Judges on AI Tool Use in Judicial Work.
- OPB / NPR. (Apr. 3, 2026). "Penalties stack up as AI spreads through the legal system." opb.org
- Charlotin, D. (2026). AI Hallucinations in Legal Proceedings — Worldwide Tracker. damiencharlotin.com
- U.S. Court of Appeals for the Sixth Circuit. United States v. Farris, Opinion (Apr. 3, 2026).
- ABA Model Rules of Professional Conduct, Rule 1.1 (Competence); Rule 3.3 (Candor Toward the Tribunal); Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance).
