Steven Schwartz was sitting in his office on a Tuesday morning in May 2023 when he realized that the legal profession was going to fight with everything it had to prevent what he had just done from ever happening again. He had used ChatGPT to research an opposition brief for his client in the Mata v. Avianca case. The tool had suggested cases that seemed relevant. He cited them in his filing. The problem, which emerged weeks later in a federal courtroom in New York, was that the cases did not exist. ChatGPT had fabricated them entirely, producing citations to opinions that had never been written, judges who had never ruled, and precedent that existed only in the statistical shadows of a language model.
Judge P. Kevin Castel did not view this with the philosophical curiosity that might attend the first collision between artificial intelligence and the legal system. Instead, he viewed it as fraud. The judge found that Schwartz and his co-counsel Peter LoDuca had acted in bad faith and levied sanctions: a $5,000 penalty, a requirement that the lawyers notify their client and every judge falsely identified as the author of a fabricated opinion, and, most importantly for the profession's purposes, a ruling that would become a template for how courts would respond to AI in legal practice. Schwartz was not simply sanctioned. He was made into an example.
The Mata decision, issued in June 2023, was the beginning of a systematic response by the American legal profession to the emergence of artificial intelligence as a tool that could perform tasks lawyers had traditionally monopolized. But this response was not primarily concerned with protecting clients from hallucinating AI systems. It was concerned with something else entirely: preserving the profession's control over who could practice law, how they could practice it, and what tools they were permitted to use. What emerged over the following three years was a strategy of institutional self-preservation dressed up in the language of ethics, competence, and professional responsibility. The courts and bar associations began to weaponize the rules governing the profession not to protect the public, but to protect lawyers from a technology that threatened their monopoly on legal work.
The evidence for this is not circumstantial. It is written into the sanctions orders, the bar association ethics opinions, the proposed amendments to rules of professional conduct, and a pattern of decisions that reveals something uncomfortable about how the legal profession regulates itself: when the technology threatens the profession's economic position, ethics rules become a tool of exclusion rather than protection.
The Architecture of Control
The American legal profession operates under a system of remarkable privilege. Only licensed attorneys can practice law. Only attorneys can charge for providing legal advice. Only attorneys can represent clients in court. These restrictions are justified, nominally, on the grounds that they protect the public from incompetent or unscrupulous legal practitioners. But they serve another purpose as well: they create a monopoly on legal work that keeps demand for legal services high and supply artificially constrained. This has made being a lawyer an economically attractive profession, at least until recently.
The monopoly works because the barriers to entry are high and the scope of what counts as "practicing law" is broad. You cannot prepare legal documents for others without a lawyer. You cannot give legal advice without a lawyer. You cannot appear in court for anyone but yourself without a lawyer. These rules are enforced by state bar associations, which have the power to discipline lawyers, revoke licenses, and prosecute non-lawyers for the unauthorized practice of law.
Then artificial intelligence arrived and began to do things that looked a lot like practicing law. AI systems could draft contracts. AI systems could analyze legal documents. AI systems could write briefs and legal arguments. AI systems could—or claimed to be able to—provide legal advice. The first response from the bar associations was to treat this as a crisis of competence: lawyers who used AI systems needed to verify that the AI outputs were accurate, because AI systems could hallucinate, could make mistakes, could produce plausible-sounding but completely false information.
This concern was not unreasonable. Judge Castel's opinion in Mata was scathing precisely because Schwartz had failed to verify the cases that ChatGPT suggested to him. "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," the judge wrote. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." The implication was that with proper verification, AI could be used responsibly in legal practice.
But what happened in the three years after Mata was not primarily a profession wrestling with how to responsibly integrate new tools. What happened was a profession using ethics rules as a mechanism to restrict how much AI lawyers were allowed to use, in effect to protect the human labor that AI systems were beginning to displace. The restrictions came in layers. First, courts began sanctioning lawyers who used AI without what judges determined was sufficient human oversight. Then bar associations began issuing ethics opinions that treated AI use as inherently risky, requiring lawyers to do redundant verification work to use a tool that was often more accurate than human legal research would have been. Finally, bar associations began proposing amendments to the rules of professional conduct that would require lawyers to take specific steps before using AI at all—steps designed not to protect clients but to make AI use burdensome enough that lawyers would simply avoid it.
Consider what happened in Park v. Kim, decided by the Second Circuit in January 2024. An attorney named Jae Lee had cited a case in an appellate brief that she had found using ChatGPT. The case did not exist. The attorney admitted that she had relied on a generative AI tool to identify precedent that might support her arguments and had not otherwise confirmed the validity of the non-existent decision. The Second Circuit referred her to the court's grievance panel for possible disciplinary action. The message was clear: using AI and failing to personally verify every result was professional misconduct, regardless of whether the client was harmed.
Then came Johnson v. Dunn, a federal case in Alabama decided in early 2026. A Nashville law firm had used generative AI to assist with legal research and writing. After finding that the firm's work product contained citations to non-existent cases, the federal judge disqualified the firm from the case, referred the attorneys to the state bars of every jurisdiction where they were licensed, and required them to file a copy of the sanctions order in every pending case in which they were counsel of record. The professional punishment was designed to be severe and visible. The message was unmistakable: the courts were going to enforce a standard of AI use that made the technology nearly unusable as a practical matter.
But here is the crucial detail that reveals the real agenda: the courts and bar associations did not respond to general AI risk in legal practice. They did not sanction lawyers who used inadequate legal research tools, or who relied on outdated practice materials, or who failed to keep current with changes in the law—all of which are common sources of error in legal practice. They specifically targeted AI use. When a human lawyer makes a mistake in legal research, it is called human error. When an AI system makes a mistake, it is called a systemic failure that requires new regulatory oversight.
The California State Bar's proposed amendments to the Rules of Professional Conduct, approved by the Committee on Professional Responsibility and Conduct in March 2026 and opened for comment just weeks ago, make this asymmetry explicit. The amendments would require lawyers to verify every AI output. They would require lawyers to ensure that AI systems were not being used in ways that would breach client confidentiality. They would require lawyers to disclose to clients that they were using AI in their representation. No analogous requirement applies to any other way of doing legal work. No rule requires lawyers to verify the output of other lawyers in their firm. No rule requires lawyers to disclose that they are delegating work to junior associates. No rule requires redundant verification of legal research conducted through traditional methods.
The differential regulatory treatment is not accidental. It is strategic. It is designed to make AI tools so burdensome to use that rational lawyers will simply avoid them, in effect protecting the human labor market for legal services from technological disruption.
The Rhetoric of Protection
If you listen to bar associations explain their approach to AI regulation, they will tell you that they are protecting clients. They will cite the hallucination problem. They will say that client confidentiality must be protected. They will argue that competence requires human oversight of AI systems. And all of this is true, in a narrow sense. AI systems do hallucinate. Client confidentiality is important. Human competence is a legitimate professional requirement.
But this rhetoric serves to obscure what is actually happening. The bar associations are not regulating AI to protect clients. They are regulating AI to protect lawyers. The two are not the same.
Consider the client confidentiality issue. The California Bar's proposed amendments would require lawyers to ensure that any AI system they use does not retain or learn from their confidential client information. This is presented as a consumer protection measure. But it is also a measure that makes using certain AI tools expensive, burdensome, or impossible. A lawyer cannot paste confidential client information into the consumer version of ChatGPT to draft a brief, because the service may retain conversations and, unless the user opts out, use them for training. The lawyer is forced either to use expensive proprietary legal AI systems designed for law firms, or to do the work themselves or delegate it to other humans. The effect is to protect the market for human legal services from disruption by AI that could do the same work cheaper and faster.
Or consider the competence issue. The bar associations say that lawyers must verify AI outputs to ensure competence. This sounds like a reasonable professional requirement. But it creates an asymmetry: a lawyer using AI must spend time verifying outputs that the AI generated in seconds, while a lawyer doing the same work manually faces no equivalent double-checking requirement. The effect is to make AI tools slower and more expensive to use than human labor, thus protecting the human labor market.
The profession calls this protecting clients. What it actually does is protect lawyers from competition.
Steven Schwartz learned this the hard way. After the Mata sanctions, he became the cautionary tale that every lawyer in America was warned about. Bar associations circulated his case in ethics trainings. Law firms rewrote their AI policies to restrict use, citing Mata. The clear message was: if you use AI and something goes wrong, you will face far more severe consequences than if you had done the work manually and made the same mistake. This is not about protecting clients. It is about deterring AI use by making the reputational and professional costs catastrophically high.
What is remarkable is that the profession has pursued this strategy even as the practical case for AI restrictions has weakened. The hallucination problem, while real, is not unique to AI. Human lawyers make errors all the time. The confidentiality problem can be solved through contractual arrangements and technology design choices. The competence requirement can be satisfied through proper verification protocols. None of these problems requires the kind of systemic restrictions on AI use that the bar associations and courts have implemented. What these restrictions really reflect is the legal profession's recognition that AI is beginning to displace legal labor, and that the profession is going to fight this disruption using the regulatory tools at its disposal.
The Invisible War on Legal Innovation
While courts were sanctioning lawyers for using AI, bar associations were fighting a parallel battle against legal technology companies that threatened to displace lawyers entirely. Companies like LegalZoom, which helps people form corporations and handle simple legal matters without hiring a lawyer, have faced decades of lawsuits from bar associations claiming unauthorized practice of law. In 2024, LegalZoom was sued again, this time by the New Jersey State Bar Association, for allegedly practicing law without a license. The company survived that suit, as it has survived dozens of others, but only through expensive litigation that created a chilling effect on legal innovation.
Then AI-powered legal tech companies began to emerge. DoNotPay announced plans to have its AI coach a defendant through a live traffic court hearing, a first step toward AI representation in court. The response was immediate and overwhelming. A class-action firm sued. State bar prosecutors threatened criminal charges. DoNotPay backed down, postponing its courtroom experiment under the pressure of legal threat. The message was clear: the legal profession would not permit AI to be used to circumvent the requirement that lawyers handle legal matters.
This is not coincidental. It is the same institutional strategy applied to a different category of threat. If you cannot use AI to give legal advice without a lawyer's involvement, then AI cannot be used to displace lawyers. If you cannot build a legal technology company that provides legal services without hiring lawyers, then the legal profession retains its monopoly on legal services.
The bar associations justify these restrictions by saying that non-lawyers should not be allowed to practice law, because they are not qualified and have not met professional standards. This argument was more compelling when "practicing law" required years of specialized training and significant expertise. But it becomes much less compelling when AI systems can handle many legal matters—creating contracts, filing documents, drafting simple legal arguments—as well as or better than many human lawyers. The profession's resistance to non-lawyer legal services is no longer about protecting consumers from incompetence. It is about protecting lawyers from competition.
What makes this particularly troubling is that the restrictions are working. They are preventing innovation in legal services. They are preventing the emergence of tools and platforms that could make legal services cheaper and more accessible to people who cannot afford lawyers. They are maintaining the legal profession's monopoly precisely at the moment when that monopoly is becoming indefensible.
The problem is not that the legal profession lacks the authority to regulate the practice of law. The problem is that the profession has every incentive to use that authority to restrict competition rather than to protect the public. And because the profession regulates itself—bar associations are composed of lawyers, disciplinary boards are staffed by lawyers, ethics rules are written by lawyers—there is no external check on whether the regulations actually serve the public interest or simply serve the profession's economic interests.
The Collapse of Democratic Accountability
The remarkable thing about how the legal profession has responded to AI is how little public notice it has received. Courts have issued sanctions orders. Bar associations have proposed amendments to the rules of professional conduct. Ethics opinions have been issued. And the public, by and large, has not noticed or cared. The legal system is opaque enough, and the issue is technical enough, that the profession has been able to implement restrictions on AI without significant external scrutiny or challenge.
This is the real danger. When a profession is permitted to regulate itself, particularly when it has every incentive to use that regulatory power to restrict competition, the results are predictable. The profession will regulate in ways that protect the profession, not the public. It will use ethics rules as tools of competitive restriction. It will justify these restrictions using language of professional responsibility and consumer protection. And because the profession controls the institutions of regulation and discipline, there is no mechanism by which the public or the courts can effectively challenge these restrictions.
The legal profession has created a system in which it can declare that AI use is unethical, even when AI systems produce better legal work than human lawyers would have. It can declare that non-lawyer legal services are unauthorized practice, even when non-lawyers could provide those services competently and cheaply. It can use the courts as a mechanism to enforce these restrictions, knowing that challenging the restrictions requires navigating the legal system itself, which is controlled by the very profession being challenged.
Steven Schwartz did not set out to reveal the profession's self-protective instincts. He was simply trying to use a new tool to serve his client. He made a mistake: he failed to verify the cases that ChatGPT suggested. But the profession's response to that mistake was not proportional to the harm. It was designed to send a message: do not use AI. The message worked.
What is lost in this outcome is difficult to calculate, but it is real. Legal innovation is deterred. Tools that could make legal services cheaper and more accessible are not developed. Lawyers continue to do work manually that machines could do more efficiently and accurately. And ordinary people continue to face legal problems they cannot afford to have lawyers address, and are increasingly prohibited from addressing themselves.
This is not a story about technology disrupting a profession. It is a story about a profession using its regulatory authority to prevent technology from disrupting it. And it is a story that will end in one of two ways. Either the legal profession will eventually relax its restrictions on AI, whether voluntarily or because external pressure forces it to, and legal services will become more accessible and affordable. Or the profession will continue to restrict AI use, and the public will lose the opportunity to benefit from technological innovation in the legal system.
The choice, it turns out, is entirely in the hands of a profession that has every incentive to choose restriction over innovation, and every tool available to enforce that choice.
