Independent Legal Ethics Journalism
April 14, 2026

California Wants to Disbar Lawyers for Using AI. Its Own Judges Are Using It to Write Rulings.

Quick Facts

  • The California State Bar action: Disciplinary action against three California-licensed attorneys over AI-generated fake citations — charges filed against Omid Emile Khalifeh (Los Angeles) and Steven Thomas Romeyn (Scottsdale, AZ), and a finalized punishment for Sepideh Ardestani (Beverly Hills) — reported by the Los Angeles Times on April 13, 2026
  • Ardestani's punishment (already imposed): State Bar Court approved a one-year probation with a 30-day license suspension plus 10 hours of mandatory continuing legal education focused on technology — effective April 6, 2026
  • Khalifeh's exposure: Six misconduct charges; used Lexis+ AI and cited one nonexistent case and two cases with "tenuous" relevance; also violated the court's January 2025 standing order requiring AI disclosure. Faces possible suspension or disbarment.
  • Romeyn's exposure: Cited nonexistent and irrelevant cases in an October 2025 personal injury filing in Orange County Superior Court. Faces possible suspension or disbarment.
  • The same courts, different rules: In March 2026, a select panel of L.A. County civil court judges was given access to "Learned Hand," an AI tool that summarizes motions and drafts rulings — a tool already deployed in 10 states. No disclosure requirement. No discipline risk. No accountability.
  • California's declared position: State Bar Chief Trial Counsel George Cardona: "Technology can assist legal practice, but it does not replace an attorney's duty of competence, diligence, and honesty." Cardona's statement did not address how judges using AI to draft rulings squares with this principle.
  • Q1 2026 national context: At least $145,000 in AI-related attorney sanctions across U.S. courts in the first three months of 2026 alone — the highest quarterly total on record
  • Sources: Los Angeles Times (Apr. 13, 2026); Governing.com (Mar. 18, 2026); Los Angeles Times (Mar. 18, 2026); ComplexDiscovery (Apr. 9, 2026)

On April 13, 2026, the Los Angeles Times reported that the State Bar of California has filed disciplinary charges against two California-licensed attorneys and finalized a punishment against a third — all for the same offense: using artificial intelligence to draft legal filings that contained citations to cases that either did not exist or bore no relation to the legal arguments they were supposed to support.

The attorneys at risk — Omid Emile Khalifeh, Steven Thomas Romeyn, and Sepideh Ardestani — now face consequences ranging from mandatory continuing legal education to suspension to disbarment. The California Supreme Court, which has final authority over attorney discipline, may be asked to strip these lawyers of their licenses to practice in the state.

On the same day the LA Times published this story, attorneys practicing in California's courts could pull up a March 2026 Los Angeles Times article reporting that a select panel of L.A. County civil court judges had been given access to an AI tool called "Learned Hand" — a program specifically designed to summarize hundreds of pages of legal motions and draft rulings. The tool, developed for judicial use, is already deployed in courts across 10 states. The judges using it face no disclosure requirement. No mandatory verification obligation. No professional discipline if the AI makes an error. No risk of losing their positions.

The California State Bar is threatening to disbar lawyers for using AI that makes mistakes.

California's courts are equipping judges with AI to draft the very decisions those lawyers are trying to influence.

These two things are happening simultaneously, in the same state, under the same legal system.

If this were a coincidence, it would be remarkable. It is not a coincidence. It is by design.

Three Attorneys, Three Cautionary Tales the Bar Wants You to Notice

The LA Times story, written by Clara Harter and published on April 13, 2026, provides detail on each of the three cases — detail that makes the institutional message clear.

Sepideh Ardestani is a Beverly Hills attorney who submitted a wage-and-hour class-action complaint in federal court in Sacramento in March 2025. The complaint cited cases that did not exist alongside other citations that were simply erroneous. When the court confronted her about the errors, Ardestani did not admit to using AI. Instead, she claimed the incorrect citations were the result of handwritten notes she had carried over from another matter. She could not produce documents supporting this explanation.

The Eastern District of California was not impressed. In its order, the court called the time it spent investigating her conduct "a waste of limited time and judicial resources in a district that has labored under a longstanding caseload crisis." On April 6, 2026, the State Bar Court approved a disciplinary stipulation: one year of probation, a 30-day license suspension, and ten hours of mandatory continuing legal education focused on technology — with at least five hours specifically devoted to the "benefits and risks of AI tools in legal work."

Omid Emile Khalifeh is a Los Angeles attorney who used Lexis+ AI to draft documents in a trademark case filed in federal court in Los Angeles. In an April 2025 filing, he cited one case that did not exist and two cases that were not relevant to the arguments for which they were cited. He also failed to comply with the court's standing order, effective January 28, 2025, requiring attorneys to disclose any use of generative AI when submitting filings.

When the court flagged the errors, Khalifeh responded defensively. He acknowledged using AI but insisted he had independently verified the accuracy of all citations. "Following drafting, I reviewed, revised, and supplemented all portions of the brief, including those that were informed by the use of Lexis+ AI or based on prior templates," he wrote. "I independently verified the factual and legal accuracy of the content." The court pushed back: one citation was nonexistent, and two others had only "tenuous" relevance. Khalifeh eventually admitted he could not verify that the fabricated case existed and withdrew the citation.

The State Bar has now filed six misconduct charges against him. The State Bar Court has not yet ruled. If it recommends suspension or disbarment, the California Supreme Court will have the final word on whether Khalifeh continues to practice law.

Steven Thomas Romeyn is a Scottsdale, Arizona attorney who filed a personal injury brief in October 2025 in Orange County Superior Court — California state court, not federal. The filing contained irrelevant and nonexistent citations. When the court flagged the issues, Romeyn disclosed his AI use and admitted he had verified several citations but had not verified every single one before filing. Charges have been filed. Disbarment is on the table.

The cases are, individually, unremarkable examples of a phenomenon that has now been documented in more than 800 U.S. court cases: attorneys using generative AI tools, submitting the output without adequate verification, and facing professional consequences when the AI hallucinates legal authority that doesn't exist. What makes the California cases notable is not the conduct itself. It is the contrast.

"Learned Hand": The AI Tool California Gave Its Judges

On March 18, 2026, the Los Angeles Times reported that L.A. County civil court judges had begun using an AI tool called "Learned Hand." The tool was developed specifically for judicial use. It can, according to the reporting, rapidly distill hundreds of pages of legal motions into digestible summaries. It can help judges draft rulings in civil cases.

Let that last function register: draft rulings. An AI tool being used by judges to produce the initial text of judicial decisions — in a court system where lawyers in those same courtrooms can be disbarred for allowing an AI tool to produce an incorrect citation in a brief.

Learned Hand is not a California-only experiment. According to the Governing.com report on the program, the tool is already in use by judges in 10 states. The L.A. County Superior Court — one of the largest court systems in the United States, processing approximately 1.2 million cases per year — has joined a growing national movement of judicial AI adoption.

The judges using Learned Hand are subject to no disclosure requirement telling litigants that the ruling in their case was drafted with AI assistance. There is no California State Bar charging process for judicial AI misconduct. There is no mandatory CLE requirement ensuring that judges who use Learned Hand understand its limitations and failure modes. There is no mechanism by which a party who received an adverse ruling can discover whether and how AI contributed to that ruling, let alone challenge it on those grounds.

State Bar Chief Trial Counsel George Cardona told the LA Times: "Courts and clients must be able to trust that the filings attorneys submit are accurate, supported, and compliant with professional standards. Technology can assist legal practice, but it does not replace an attorney's duty of competence, diligence, and honesty."

Cardona's statement is about attorneys. It does not address judges. It does not address whether courts and clients must also be able to trust that the rulings judges issue are accurate and free from AI error. The principle — that technology assists but does not replace the human professional's responsibility for accuracy — is announced as a universal truth while being applied exclusively to one half of the legal system.

The Disclosure Double Standard That Defines the Whole System

The Khalifeh case contains a detail that crystallizes the California AI enforcement regime's fundamental asymmetry: the federal court's standing order requiring attorneys to disclose AI use in filings.

The U.S. District Court for the Central District of California issued a standing order on January 28, 2025, requiring attorneys to disclose when they use generative AI to prepare filings. Similar disclosure orders have been adopted by federal courts across the country and by many state courts. The rationale is transparency: judges need to know how documents were prepared so they can assess their reliability.

Khalifeh is charged, in part, with violating this disclosure order. His failure to tell the court that he used Lexis+ AI is itself a separate professional misconduct charge on top of the citation errors.

The federal court that issued this order requires litigants to disclose their AI tools. Meanwhile, the L.A. County Superior Court — the state system in the same city — has piloted Learned Hand, an AI tool for drafting judicial decisions. In neither courthouse is there a standing order requiring judges to disclose when they use AI to help draft rulings. There is no form attorneys can file asking the court to certify that the ruling in their case was prepared without AI assistance. The transparency requirement flows in one direction only: from lawyers to courts. The courts owe litigants no parallel transparency about their own AI use.

This is not an oversight. In every jurisdiction that has imposed AI disclosure requirements on attorneys, the same courts have been silent about their own AI adoption. The Northwestern University survey published in the Sedona Conference Journal in March 2026 found that 61.6% of federal judges use AI in their judicial work — with 30% using AI specifically for legal research, the same activity that gets attorneys sanctioned. And 45.5% of those judges received no training from their courts on how to use it responsibly.

Attorneys who use AI without training face disbarment. Judges who use AI without training face nothing.

The Eastern District's "Caseload Crisis" and What It Actually Means

The Eastern District of California's rebuke of Ardestani invoked the court's "longstanding caseload crisis" — characterizing the time spent investigating her AI misconduct as a waste of limited judicial resources. This framing deserves scrutiny.

The Eastern District of California is genuinely overburdened. For years, the district has operated with a severe shortage of active judges relative to its caseload — one of the most extreme judge-to-case ratios in the federal system. Cases take longer to resolve. Litigants wait longer for hearings. Justice is delayed in proportion to the court's capacity constraints.

In this context, AI tools that could help attorneys prepare better briefs faster — producing more accurate, more focused, better-organized legal arguments — would seem to be directly in the court's interest. An attorney who uses AI effectively and verifies its output could give the Eastern District cleaner filings that require less judicial processing time. An attorney who uses AI carelessly and submits hallucinated citations creates the opposite: a sanctions investigation, a show cause order, a round of responsive briefing, and now a State Bar proceeding.

The court's framing — AI creates waste for our overburdened docket — is accurate as applied to irresponsible AI use. It does not hold for AI use generally. The solution to AI-related waste in legal filings is not to eliminate AI; it is to ensure attorneys have the training and tools to use AI responsibly. Yet the Eastern District's response to Ardestani's misconduct was not to recommend better training. It was to sanction her.

The caseload crisis that the Eastern District cited as justification for its sanction is, in part, a crisis that better AI adoption could help address — by allowing attorneys to produce better work product more efficiently. But efficiency in attorney practice threatens the business model of legal practice, which is built on the billable hour. More efficient attorneys produce more work in less time, which means fewer billable hours, which means less revenue for the firms that employ them and the bar associations that represent them.

The sanctions regime solves the caseload crisis in the way the legal establishment prefers: by making AI expensive enough that attorneys avoid it, preserving the slow, expensive, traditional methods that generate the billable hours the profession's economics require.

Khalifeh's Defense and Why It Reveals the Standard's Impossibility

Khalifeh's written response to the court's Show Cause Order is worth examining as a document of professional self-preservation. "I independently verified the factual and legal accuracy of the content and confirmed that all arguments and authorities were appropriate to the issues presented," he wrote. He is asserting that he did what the professional responsibility framework requires: he used AI, and he verified the output.

The court found his verification inadequate. One citation was nonexistent. Two had only "tenuous" relevance. Khalifeh eventually withdrew the nonexistent citation after repeated pressure.

The implicit standard the court is applying is total accuracy: every AI-generated citation must be verified to the point of absolute correctness. But what does "adequate verification" mean in practice? Reading every case cited in a brief — including every case cited in every string citation — and confirming not just that it exists but that it says what the brief claims it says and that it is sufficiently relevant to the argument at hand is not supplementary to the work of drafting a brief. It is equivalent to redoing the research from scratch without AI assistance.

This is, in fact, the point. The verification standard being applied to AI-assisted legal work is a standard that, if actually implemented, makes AI use more burdensome than non-AI practice. The cost of compliance is designed to exceed the productivity benefit of the tool. Attorneys who calculate their verification obligations honestly will conclude that using AI and meeting the courts' expectations requires more time than not using AI at all.

Meanwhile, judges using Learned Hand to draft rulings are not required to independently verify every case cited in the AI-generated draft against the underlying opinions to confirm accuracy and relevance. They review the draft. They edit as appropriate. They issue the ruling. The standard for judicial AI use is one of reasonable professional judgment. The standard for attorney AI use is one of absolute accuracy or professional destruction.

Khalifeh said he verified the citations. The court said his verification was insufficient. Six misconduct charges were filed. Disbarment is on the table. The standard is being enforced in a way that makes compliance functionally impossible without eliminating the tool entirely.

The New Mexico Echo: AI Mistakes Are Everywhere, Accountability Is Selective

The California cases are not occurring in isolation. The Albuquerque Journal reported this week that New Mexico judges are seeing AI mistakes creep into legal cases across the state — and that at least one attorney paid ,640 in sanctions after a judge determined his filings were afflicted by AI hallucinations. New Mexico joins California, Oregon, Indiana, Alabama, Nebraska, and the Sixth Circuit in the Q1 2026 AI sanction wave, which has totaled at least $145,000 in penalties.

The Illinois Attorney Registration and Disciplinary Commission published a piece this week — titled "Paste in Haste: The Fallout of AI Hallucinations" — noting that the pattern of attorney sanctions for AI-generated fake citations has become so common that it brings a "new headline" essentially every week. The IARDC analysis frames the issue as professional responsibility: attorneys have obligations to their clients and to courts that AI tools cannot satisfy on their behalf.

What neither the IARDC nor the California State Bar has addressed is the structural question that the proliferation of these cases makes impossible to avoid: if the professional responsibility framework is generating a new AI sanction headline every week, and if the rate of AI hallucination cases is accelerating despite record penalties, what does that tell us about the framework's design?

The California State Bar's answer is more charges, more suspensions, more disbarments. The New Mexico courts' answer is more sanctions. The Illinois ARDC's answer is more continuing legal education. Nobody's answer is: mandatory AI training as a condition of bar admission, institutional support for verification workflows, and accountability frameworks that apply equally to the judges who are using the same tools.

Nobody's answer is Learned Hand with disclosure requirements.

What California's State Bar Is Really Protecting

The State Bar of California is a mandatory membership organization. Every attorney licensed to practice law in California must be a member. Annual dues support the bar's operations, including its enforcement apparatus — the investigators, attorneys, and administrative staff who process misconduct complaints and prepare charges for presentation to the State Bar Court.

The bar's stated mission is to protect the public from attorney misconduct. When Chief Trial Counsel Cardona says that "courts and clients must be able to trust that filings are accurate," he is speaking in the language of public protection. The disciplinary machinery is deployed in the name of the clients whose interests are supposedly served by attorney competence standards.

But the clients of Ardestani, Khalifeh, and Romeyn are not well-served by the sanction campaign in any direct sense. Ardestani's wage-and-hour class-action clients do not receive compensation for the Eastern District's wasted judicial resources. Khalifeh's trademark client does not win its case because Khalifeh is charged with misconduct. Romeyn's personal injury client does not receive a better outcome because his attorney faces disbarment for imperfect citation verification.

The sanctions serve the institution, not the clients. They serve it in two ways. First, they enforce the principle that AI cannot be trusted — a principle that preserves the scarcity of legal expertise that justifies the billable hour. An attorney who uses AI effectively and reliably would be able to provide more legal services, more quickly, at lower cost. This would benefit clients. It would reduce the revenue per attorney-hour that law firm economics require. The sanction campaign makes sure that reliably effective AI use in legal practice is a professional impossibility, by ensuring that any AI error — however small, however quickly corrected, however inconsequential to the actual outcome of the case — triggers a process that can end a career.

Second, the sanctions protect the institutional authority of the courts and the bar over the definition of legal competence. If AI can produce acceptable legal work without the supervision of a licensed attorney, the licensing system is less valuable. The bar's authority derives from the monopoly it holds over the right to practice law. That monopoly is secure as long as competent legal work requires human legal training. AI threatens the monopoly not by being perfect but by being good enough — good enough that clients might prefer it to expensive human alternatives, good enough that the legal work it produces could satisfy legal needs without a bar card.

The sanction campaign does not eliminate this threat. But it makes pursuing it professionally dangerous, and it ensures that the attorneys who would most benefit from AI — the ones who cannot compete with BigLaw on price and who need AI to close the productivity gap — are the ones most exposed to career-ending consequences when the technology falls short.

The Profession's Reckoning Is Coming, and California Is Showing Which Side It Has Chosen

The convergence of events in California this spring — State Bar charges against three attorneys for AI misconduct, judicial AI adoption through Learned Hand — is not a local story. It is a preview of how the legal profession intends to manage the most disruptive technology in its history.

The management strategy is simple: let the institutions use AI and punish the practitioners who do. Courts adopt AI tools to increase their own efficiency while imposing accountability frameworks on attorneys that make AI use professionally dangerous. Law firms deploy AI governance programs that protect the firm from sanction exposure while leaving individual associates to navigate verification obligations alone. Bar associations issue ethical guidance that sets standards no training program is required to help attorneys meet. State bars file misconduct charges against attorneys who fail to meet standards that are, in practice, impossible to meet without either eliminating AI or hiring additional staff to verify AI output.

The attorneys facing charges today — Khalifeh, Romeyn, Ardestani — are not the problem. They are the symptom. They are practitioners who reached for a tool that every legal technology company in the country was telling them would increase their efficiency, and they learned, the hard way, that the profession's disciplinary apparatus was prepared to destroy their careers for using it imperfectly.

Meanwhile, in courtrooms across California, Learned Hand is drafting rulings. In 10 states, the judges who will decide whether attorneys' AI-generated citations are accurate enough are themselves using AI-generated summaries of the motions those citations appear in. The attorneys are required to certify the accuracy of their AI output. The judges are not required to certify anything.

George Cardona says that technology does not replace an attorney's duty of competence. He is right. It also does not replace a judge's duty of competence. The California State Bar has not filed charges against any judge for using Learned Hand without adequate verification of its AI-generated summaries. It has not issued guidance about the disclosure obligations that attach to AI-drafted rulings. It has not established a process for litigants who receive rulings to discover whether AI was used in their preparation.

These things will come. When they do — when the institutional AI adoption that is already underway in California's courts becomes undeniable enough to trigger the same transparency questions that are being applied to attorneys — the legal establishment will be forced to explain why the same technology that justifies disbarment for Khalifeh was appropriate for judicial use all along.

The answer will be revealing. The legal profession has always been better at writing rules for others than at applying them to itself.


Sources and Citations

  • Los Angeles Times / Harter, C. (Apr. 13, 2026). "Attorneys used AI to write court filings, cited fake legal decisions, State Bar alleges." latimes.com
  • Los Angeles Times / Tchekmedyian, A. (Mar. 18, 2026). "AI pilot program in L.A. County courts will help judges craft rulings in some cases." latimes.com
  • Governing.com. (Mar. 18, 2026). "Los Angeles Courts Pilot AI Tool to Help Judges Draft Rulings." Learned Hand deployed in 10 states. governing.com
  • ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures." complexdiscovery.com
  • Illinois Attorney Registration and Disciplinary Commission (IARDC). (Apr. 13, 2026). "Paste in Haste: The Fallout of AI Hallucinations." iardc.org
  • Albuquerque Journal. (Apr. 12, 2026). "NM judges see AI mistakes creeping into legal cases." abqjournal.com
  • Northwestern University / Sedona Conference Journal. (Mar. 2026). "Federal judges report broad adoption of AI tools." Survey: 61.6% of federal judges use AI in judicial work; 45.5% received no training. news.northwestern.edu
  • State Bar of California. (2026). Notices of Disciplinary Charges: Omid Emile Khalifeh, Steven Thomas Romeyn. Stipulation re Discipline: Sepideh Ardestani (approved Apr. 6, 2026).
  • U.S. District Court for the Central District of California. Standing Order on Use of Generative AI (effective Jan. 28, 2025).