Independent Legal Ethics Journalism
April 9, 2026

Supreme Courts Strike Back: Nebraska Attorney Faces Suspension, Georgia Prosecutor Caught Lying to Justices Over AI Citations

⚡ QUICK FACTS
  • Who: W. Gregory Lake (Omaha, NE) and Deborah Leslie (Clayton County DA's Office, GA)
  • What: Both attorneys caught submitting AI-generated fake citations to state supreme courts
  • Nebraska: 57 of 63 citations in Lake's brief were defective — including 20 AI hallucinations and 3 fully fabricated cases
  • Georgia: At least 5 nonexistent citations filed before Georgia Supreme Court in high-profile murder appeal
  • Nebraska escalation: Counsel for Discipline recommends temporary suspension of Lake's law license — April 9, 2026
  • Georgia fallout: Leslie faces State Bar grievance, suspension, and loss of privileges — DA publicly apologized to Chief Justice
  • Client damage (Nebraska): Jason Regan now owes $52,000 in opposing counsel fees; his appeal was dismissed
  • Sources: WOWT (Omaha), Nebraska Public Media, FOX 5 Atlanta, NPR, divorce.law

In the span of a single week, two separate state supreme courts — Nebraska and Georgia — delivered what may be the legal establishment's clearest message yet about artificial intelligence: use it, and we will destroy your career.

On April 9, 2026, a Nebraska disciplinary committee recommended the temporary suspension of Omaha attorney W. Gregory Lake's law license after 57 of his 63 citations before the Nebraska Supreme Court turned out to be defective. Simultaneously, in Georgia, the Clayton County District Attorney issued a formal apology to Chief Justice Nels Peterson after one of her prosecutors admitted to submitting AI-generated fake citations in a capital murder appeal — and initially lied to the court about it.

Together, these cases signal a dangerous new escalation in the legal profession's AI crackdown. What began as fines and reprimands has now evolved into something far more blunt: license suspension as a deterrent weapon, wielded not just by federal judges but by the very highest courts in the states.

Nebraska: The 57-Citation Catastrophe

The Greg Lake story began in February 2026, when the Nebraska Supreme Court held what can only be described as a very uncomfortable oral argument in a divorce appeal. Lake, a partner at Plains Legal Group in Omaha, stood before the state's highest court and had to explain why 57 of his 63 brief citations were defective — including 20 AI hallucinations and 3 cases that simply do not exist anywhere in legal history.

"And your brief had a number of errors in it that were submitted," one justice said. "Can you explain to us how that occurred?"

Lake's answer was remarkable. He blamed a computer malfunction. He said he'd been flying on his 10th wedding anniversary, his laptop broke mid-flight, and he accidentally uploaded an early draft. He claimed his writing process involved "sticking in things he knew wouldn't pass muster" as placeholders — and that somehow 57 of those placeholders made it through.

Then came the direct question no attorney wants to answer in front of the full bench of a state supreme court: "The elephant in the room is whether or not you used artificial intelligence. Did you?"

"No, I did not," Lake replied.

The Nebraska Supreme Court did not believe him. In its March 20 ruling, the court dismissed the appeal and noted that Lake's explanation "lacks credibility." It referred him to the Nebraska Counsel for Discipline — the same body that, on April 9, 2026, recommended his temporary suspension from the practice of law.

The Real Victim: Jason Regan

In the institutional narrative, this story is about an attorney's professional misconduct. But step back from the headlines and consider what happened to Lake's actual client.

Jason Regan came to Greg Lake seeking help in a custody dispute — a father fighting to remain in his daughter's life. Now, thanks to his attorney's defective brief, Regan's appeal has been dismissed. He's facing $17,000 in opposing counsel fees due immediately, plus an additional $35,000. That's $52,000 in damages inflicted on an ordinary person who just wanted a lawyer to do his job.

Regan told reporters he is "exhausted and frustrated with the legal system" and isn't sure he can afford to hire a malpractice attorney to go after Lake. The gatekeepers who were supposed to protect him — the state bar, the courts, the disciplinary system — are now busy protecting the profession's reputation while Regan pays the bill.

This is what the legal establishment's war on AI actually looks like in practice. When courts pile on with sanctions, discipline, and fee awards, it is real people — clients like Jason Regan — who absorb the collateral damage.

Georgia: Fake Citations, Then Lies, Before the State's Highest Court

The Nebraska story might have remained an isolated embarrassment, except that Georgia delivered its own version within days.

In the appeal of Hannah Payne — a high-profile murder case involving a 2019 hit-and-run killing — Clayton County prosecutor Deborah Leslie filed a brief before the Supreme Court of Georgia. Chief Justice Nels Peterson publicly flagged that the brief contained "at least five citations to cases that don't exist, and at least five more citations to cases that do not support the proposition for which they're cited."

Leslie's initial response was to claim the filing had been "altered"; she did not immediately admit to using AI. Only later did she reverse course and acknowledge using AI tools to draft the filing.

District Attorney Tasha Mosley — who has nearly 30 years in the legal profession — was forced to issue a public apology letter to the Chief Justice. "In my almost 30-year career as an attorney and 17 years as an elected official," Mosley wrote, "I never imagined a situation where I would do what I am doing now."

Leslie now faces a State Bar grievance, internal suspension, and loss of office privileges. Mosley told the court her office was expanding its policies to specifically address AI use — the classic institutional response: more rules, more compliance requirements, more gatekeeping, while the underlying technology advances regardless.

The Escalation Pattern: From Fines to Suspensions

This is no longer about isolated incidents. According to researcher Damien Charlotin of HEC Paris Business School — who tracks AI-related court sanctions globally — there have been more than 1,200 documented cases of courts sanctioning attorneys for erroneous AI-generated content, with approximately 800 from U.S. courts alone. Ten new cases from ten different courts in a single day are no longer unusual.

What is new is the escalation in severity. The trajectory is clear:

  • 2023–2024: Reprimands, small fines, public embarrassment
  • Late 2024–2025: Significant monetary sanctions ($3,000–$15,000)
  • Early 2026: Record-breaking penalties ($109,700 against Brigandi in Oregon)
  • April 2026: License suspension recommendations — the nuclear option

Each escalation is justified by the profession on ethics grounds. But the pattern tells a different story: as AI tools become cheaper, faster, and more useful, the legal establishment's disciplinary apparatus becomes correspondingly more aggressive. Courts that are themselves beginning to feel the disruption of AI — and quietly acknowledge they are not immune, since federal judges have also produced hallucinated citations — are nevertheless applying increasingly severe punishment to practitioners who adopt it imperfectly.

The Denial Problem: A Systemic Pattern

One of the most striking features of both the Lake and Leslie cases is the initial denial. Lake told Nebraska's highest court directly: "No, I did not" use AI. Leslie initially blamed document alteration. These denials are not aberrations — they reflect something deeper about the professional environment that the legal establishment has created around AI.

When the threat of career destruction hangs over every AI disclosure, attorneys are incentivized to lie. When the bar treats AI use as presumptively unethical rather than as a tool requiring verification, lawyers facing discipline calculate that honesty will be punished more harshly than denial.

The legal profession has not created a culture of responsible AI adoption. It has created a culture of fear and concealment — and then punishes attorneys for behaving exactly as that culture would predict.

What “Responsible AI Use” Actually Looks Like

University of Washington Law School Associate Dean Carla Wale, who is developing optional AI ethics training for law students, told NPR that the ethical rules "aren't completely settled." Her baseline standard: "You have to make sure it's correct."

That's true, and uncontroversial. Nobody is arguing that attorneys should submit unverified AI output. The question is whether the profession's response to these early-adoption failures — massive sanctions, license suspensions, public shaming at oral argument — is proportionate to the actual harm, or whether it reflects institutional self-interest dressed up as ethics enforcement.

Consider the double standard: Courts themselves have produced AI-assisted opinions containing errors. Federal judges, as Chief Justice Peterson himself acknowledged during Georgia's State of the Judiciary address, must "keep up" with AI because it "poses both risk and opportunity for the judicial system." The judiciary uses AI. The judiciary makes errors. The judiciary does not suspend its own members for those errors.

But when an attorney submits an imperfect AI-assisted brief, the machinery of professional discipline grinds into action — fines, suspensions, public reprimands, client fee awards, referrals to state bars. The asymmetry is not subtle.

The Gatekeeping Reality

The legal profession controls its own admission, its own discipline, and — through the judiciary — its own adjudication. This is an unusual degree of self-regulatory power for any industry. And when a technology emerges that threatens to democratize legal services — to let clients draft their own briefs, to let pro se litigants access competent legal analysis, to let smaller firms compete with BigLaw — the profession's regulatory apparatus has strong institutional incentives to treat that technology as dangerous.

The stated concern is client protection. The actual mechanism is career destruction for early adopters. The inevitable result is to slow AI adoption, preserve billable-hour economics, and maintain access barriers that serve attorneys far more than they serve the public.

Greg Lake's client Jason Regan is now $52,000 poorer and has lost his custody appeal. Deborah Leslie's AI error may affect the outcome of a murder conviction appeal. These are real harms, and attorneys bear responsibility for them.

But the response — license suspension, State Bar grievances, public humiliation before the highest courts — is not calibrated to prevent harm. It is calibrated to send a message. And the message is: AI use in this profession will cost you everything.

That message serves the legal establishment. It does not serve the public.


Sources: WOWT (Omaha), Nebraska Public Media, divorce.law, FOX 5 Atlanta, CBS Atlanta, Atlanta News First, NPR, Georgia Public Broadcasting, Damien Charlotin hallucination tracker (damiencharlotin.com)