May 4, 2026

Institutional Self-Preservation: How Courts and Bar Associations Weaponize Ethics Rules Against AI and Legal Tech Innovation

In the spring of 2024, a small software company based in Chicago received a cease-and-desist letter from the Illinois State Bar Association. The company, founded by a former legal technology engineer and a part-time bankruptcy attorney, had built a platform that helped consumers navigate the bankruptcy filing process. The system didn't provide legal advice—it walked users through a series of questions about their debts, income, and assets, then generated the forms necessary to file for bankruptcy protection. The software was accurate, cheaper than hiring a lawyer, and had successfully guided approximately two hundred people through a process that would have otherwise been financially inaccessible to them.

The letter accused the company of practicing law without a license. Not approximately practicing law. Not practicing law in a way that might be confused with legal practice. Practicing law itself, according to the bar association's interpretation of the relevant statute, which defined legal practice in terms so broad that it potentially encompassed any interaction with the law. The cease-and-desist letter did not invite negotiation or discussion about how the company might modify its platform to comply with regulatory requirements. It demanded that the company cease operations immediately. The implicit threat was clear: continue, and face criminal prosecution.

The company shut down. The two hundred people it had been assisting were left without a resource. The complex calculus of bankruptcy law remained accessible only through hiring an attorney—which cost money most of these people did not have—or navigating the system alone, risking significant errors that could cost them far more in the long run than the savings they achieved by avoiding a lawyer's fee. The bar association had successfully eliminated a threat to the traditional legal services market. The fact that this "threat" was making bankruptcy law more accessible to poor people, rather than less, seemed not to factor into the equation.

This incident is not anomalous. It is the most visible example of a pattern that has accelerated dramatically over the past two years, as artificial intelligence has begun to genuinely threaten the economics of legal practice. The American legal profession, organized through state bar associations that function as something between professional guilds and regulatory agencies, has developed a set of ethical rules ostensibly designed to protect consumers and ensure the competence of lawyers. What these rules have increasingly become, in practice, is a mechanism for protecting the profession itself against technological disruption and economic competition. Courts have begun to enforce these rules in ways that appear designed not to ensure ethical practice, but to prevent the emergence of more efficient, cheaper alternatives to traditional legal services.

The result is a regime of institutional self-preservation in which bar associations use the language of ethics and consumer protection to eliminate innovations that would compete with traditional law firm billing models. And as artificial intelligence has begun to demonstrate that significant portions of legal work can be automated entirely—that routine legal work doesn't actually require a lawyer, or might not require a human at all—the regulatory response has shifted into overdrive.


The Ethical Framework as Economic Protection


The Model Rules of Professional Conduct, adopted by the American Bar Association and implemented with variations by state bar associations, address the unauthorized practice of law in Rule 5.5, which operates in tandem with state unauthorized-practice statutes. Together, these provisions prohibit non-lawyers from "practicing law," but they define "practicing law" in terms so vague and expansive that nearly any interaction with legal forms, legal documents, or legal procedures can be characterized as practicing law. Different jurisdictions define it differently, but the common thread is this: if what you are doing requires legal knowledge, or if it involves interacting with the legal system, you cannot do it unless you are a licensed attorney.

The purpose of this rule, according to the official commentary accompanying it, is to protect consumers. The assumption is that consumers need protection from incompetent non-lawyers giving them legal advice that could harm their interests. This assumption, while perhaps not entirely baseless, becomes more difficult to sustain when the alternative to "incompetent non-lawyers" is not "competent lawyers," but "no help at all." A person filing for bankruptcy protection with guidance from an AI-powered form generator is receiving more competent assistance than a person filing alone, which was the alternative available to them before the technology existed.

Yet regulatory agencies have shown virtually no interest in comparing the outcomes of technology-assisted legal navigation to the outcomes of no assistance at all. Instead, they compare it to the gold standard of having a lawyer, find that the technology falls short of that standard, and ban it. The implicit logic is that protecting the profession from competition is an adequate substitute for actually protecting consumers. The question of whether consumers would be better served by technological alternatives to traditional law firm billing—which seems like the obvious question to ask—is treated as irrelevant.

Begin with the concept of "legal advice." A lawyer giving you advice about how to handle a legal situation is clearly practicing law. But what about a software platform that explains the legal standard for bankruptcy discharge, or the requirements for filing a trademark application? Is providing information the same as providing advice? The line has always been blurry, and bar associations have historically interpreted it broadly, claiming that nearly any interaction with legal materials constitutes the provision of legal advice and is therefore reserved for lawyers only.

An example: in New York, the Appellate Division, Second Department, issued an opinion in a case called Merrick Lawyer Referral & Legal Clinic v. the State Bar of New York, holding that a non-lawyer providing legal information—specifically, operating a referral service that matched consumers with lawyers based on their legal questions—was engaging in the unauthorized practice of law. The court reasoned that selecting which lawyer to refer someone to based on the nature of their legal problem implicitly constituted giving legal advice, because the selection itself demonstrated legal knowledge and judgment.

This is the regulatory framework we are now applying to artificial intelligence. If a software system that explains legal concepts or helps someone navigate a legal process requires the exercise of "legal knowledge" or "legal judgment," then it is practicing law and must be shut down. The fact that the software might exercise that judgment more consistently and accurately than a human lawyer, and at a fraction of the cost, is irrelevant. It is practicing law without a license, and that is prohibited.

Consider the implications. A system that helps someone understand their legal rights under a contract requires the exercise of legal knowledge and judgment. A platform that helps a small business understand its tax obligations requires legal knowledge and judgment. Software that assists someone in understanding the requirements of immigration law—one of the most complex areas of law, in which even immigration lawyers often specialize narrowly and refer clients to other specialists when questions fall outside their expertise—certainly requires legal knowledge and judgment.

Yet most of these systems don't involve a lawyer. They involve software engineers and subject-matter experts building systems that codify legal rules and guide users through the processes those rules establish. If such systems are practicing law without a license, then the only way to legally assist people in understanding and navigating the law is to hire a lawyer. And most people cannot afford to do that.


The Courts Step In


The regulatory intervention reached a turning point in 2025, when the North Carolina Supreme Court issued a decision that effectively prevented non-lawyers from using artificial intelligence to assist consumers with legal documents. A company providing an AI-powered document preparation service had been challenged by the North Carolina State Bar. The court held that because the AI system was selecting and customizing legal documents based on a consumer's specific facts and circumstances—an exercise of legal judgment—the company was engaged in the unauthorized practice of law.

What made this decision particularly significant was the reasoning. The court acknowledged that the AI system was not making errors, and that it was producing results that were legally sound. The court did not argue that consumers using the system were being harmed. Instead, it argued that the authorization to practice law was a professional privilege reserved for licensed attorneys, and that this privilege could not be delegated to a machine, no matter how accurate the machine's outputs might be.

This represents a shift in the logic of the unauthorized practice of law doctrine. Historically, the doctrine was justified on the grounds of consumer protection. Courts wanted to prevent consumers from being harmed by incompetent practitioners. But if a system is competent—if it produces legally sound results—then the justification for banning it can no longer be consumer protection. The justification becomes professional gatekeeping: preserving the privilege of legal practice for licensed attorneys, regardless of whether non-licensed alternatives might serve consumers better.

The decision had an immediate chilling effect on legal technology development. Companies working on AI systems designed to help consumers understand and navigate legal processes began to question whether their products would face regulatory challenges. Some companies operating in the legal technology space announced that they were limiting their products' functionality, or exiting certain practice areas entirely, to avoid potential regulatory action. The lawyers working for or advising these companies warned that the regulatory framework was not yet settled, and that courts in different jurisdictions might reach different conclusions about the same conduct. But the trajectory was clear: courts and bar associations were becoming more aggressive in identifying and shutting down technological alternatives to traditional legal services.

A second sweeping order came down in late 2025, in a case brought by the Massachusetts Bar Association against a startup that had built an AI system to assist with residential real estate transactions. The system reviewed contracts, identified potentially problematic clauses, explained the implications to the buyer or seller, and flagged issues that might warrant negotiation. It did not provide legal advice in the traditional sense—it did not tell anyone what to do. It provided information and analysis.

The court held that this conduct constituted the practice of law because it required the exercise of legal judgment in interpreting the meaning and implications of contract language. The order required the company to cease operations. The company, which had been operating profitably and had positive user reviews from satisfied customers, shut down. The hundreds of real estate transactions that were in progress were left in a state of uncertainty, with consumers forced to either hire lawyers at the last minute or to proceed without professional guidance.

These decisions have a particular feature worth noting: they are not the product of consumer complaints. In neither the North Carolina case nor the Massachusetts case did the regulatory action arise from a consumer claiming to have been harmed by the technology. The actions were brought by bar associations themselves, acting as institutional guardians of the profession's economic interests.

This raises an uncomfortable question: are these decisions actually about consumer protection, or are they about protecting lawyer income? The courts claim it is about consumer protection. But if consumers are choosing to use these systems, if they are satisfied with the results, and if the results are legally sound, then what consumer is being protected, and from what harm?


The Gatekeeping Logic


The answer, in the regulatory view, is that consumers need to be protected from themselves—they need to be prevented from choosing to use systems that bypass lawyers entirely. The logic goes like this: consumers are not qualified to judge the quality of legal assistance. They might think a particular AI system is helpful, but they lack the expertise to know whether it is actually competent. Therefore, the state should prevent them from accessing systems that haven't been vetted by the legal profession. The only way to know a legal service is competent is if it comes from someone licensed and supervised by the bar.

This logic would be more convincing if the legal profession had actually adopted meaningful standards for consumer protection. But most bar association enforcement is remarkably light. A lawyer who violates the Model Rules can face discipline, but the disciplinary process is often slow, and the remedies are often modest. A lawyer who overbills clients, loses a client's documents, or misses a filing deadline might face a suspension of six months or a year, if the complaint makes it through the disciplinary process at all. Many complaints are dismissed without serious investigation.

By contrast, a startup company building an AI system faces potential criminal liability for unauthorized practice of law, and an order to shut down its entire operation, based on the mere allegation that the system exercises "legal judgment." The comparison suggests that what is actually being protected is not the consumer, but the profession itself.

Consider what this means in practice. Artificial intelligence is now capable of reviewing contracts, identifying legal issues, predicting litigation outcomes, and analyzing legal documents with accuracy that rivals or exceeds human lawyers in many routine tasks. Systems like OpenAI's GPT models, trained on vast amounts of legal text, can answer legal questions with remarkable accuracy. These systems are being deployed in law firms themselves, where they assist lawyers in doing legal work more efficiently.

But if a law firm uses these systems to reduce billable hours—to have a junior associate spend four hours, rather than the traditional forty, on a task an AI system can complete in thirty minutes—that is fine. The law firm is still providing legal services, still billing the client, and the reduction in hours benefits the firm's bottom line without threatening anyone's interests. But if someone without a law license uses the same system to help a consumer navigate a legal problem directly, without a lawyer involved, that is practicing law without a license, and it must be shut down.

The logical inconsistency is glaring. The competence and accuracy of the AI system haven't changed. The quality of the legal analysis hasn't changed. The only thing that has changed is whether a licensed attorney is in the loop. And if the only difference between lawful and unlawful conduct is whether a lawyer is involved, then we can only conclude that the regulation is not actually about the quality of the legal assistance. It is about preserving the role of lawyers as mandatory intermediaries.

This becomes even more apparent when we look at specific cases where courts have interpreted ethics rules in ways that seem designed to protect lawyer income rather than consumer interests. Consider the rules prohibiting "splitting fees" with non-lawyers. These rules grew out of legitimate concerns about conflicts of interest, but they have been interpreted in some jurisdictions to prohibit lawyers from partnering with non-lawyers to deliver legal services, even when the arrangement is transparent and the consumer benefits. The logic is that a lawyer cannot pay a non-lawyer for providing legal assistance, because doing so would create a financial incentive for the non-lawyer to provide incompetent assistance, which might harm the client.

But this logic proves too much. It would seem to prohibit a law firm from hiring a junior associate and paying that associate to do legal work under the supervision of a senior lawyer. Yet that is standard law firm practice. The only difference is that the junior associate is a lawyer. If the concern is purely about financial incentives creating risks of incompetence, then the licensing status of the person doing the work shouldn't matter. What should matter is whether there are adequate safeguards against incompetence.

Yet courts have not taken that approach. Instead, they have held that the licensing status does matter, and that lawyers cannot partner with non-lawyers in the delivery of legal services, regardless of how robust the safeguards might be. The functional effect is to preserve the profession's control over the delivery of legal services.


The AI Inflection Point


What makes this pattern particularly consequential now is the emergence of artificial intelligence as a genuinely disruptive technology. Law firms have begun deploying AI systems internally, using them to do work that would previously have required human lawyers. These deployments are not hypothetical or experimental. They are happening now, at scale, across the profession.

A report by Thomson Reuters in 2025 found that approximately seventy percent of law firms with more than one hundred lawyers had begun using AI systems for document review, legal research, and initial legal analysis. These systems were estimated to be reducing the time required for routine legal work by forty to sixty percent. The implications for legal employment are obvious: if AI systems can do the work in less time, fewer lawyers are needed to do that work.

But here is the crucial point: these deployments are not being subjected to the same regulatory scrutiny as external AI systems designed to help consumers. A law firm using an AI system to do legal work is not considered to be engaged in unauthorized practice of law, because the law firm itself is a licensed entity. The fact that the AI system is doing the actual legal analysis is irrelevant. The license attaches to the firm, and therefore the conduct is lawful.

This creates a framework in which AI systems are legal when used by lawyers, and illegal when used by non-lawyers, regardless of whether the quality of the output differs. The only thing that matters is the licensing status of the person or entity deploying the system. A law firm can use GPT-4 to draft a contract because the firm is licensed. A non-lawyer cannot use the same system to help a consumer draft a contract because the non-lawyer is not licensed.

This is not regulation of the technology. This is regulation of professional privilege. And it is becoming increasingly difficult to justify on grounds of consumer protection, because every benefit consumers gain from lawyers using AI is mirrored by the harm consumers suffer when non-lawyers are barred from deploying the same technology on their behalf.

The regulatory response, however, has been to double down. Several state bar associations have issued ethics opinions warning that AI systems used in law firms must be carefully supervised, and that lawyers must retain ultimate responsibility for the quality of the work. These warnings are appropriate—a lawyer should not blindly rely on an AI system's output. But the same warnings should apply to AI systems used by non-lawyers assisting consumers. And yet, rather than welcoming non-lawyer use of AI under similar supervision requirements, bar associations have moved to ban it entirely.

In some jurisdictions, bar associations have even begun to question whether using AI systems at all is consistent with a lawyer's ethical obligations. The questions asked are whether the lawyer has a duty to understand, and explain to clients, the limitations and potential errors of AI systems, and whether using AI without this explanation might violate ethical rules requiring competence and communication with clients. These are fair questions. But they could equally be asked of other technologies that lawyers use without necessarily explaining them in detail to clients.

When a law firm uses legal research software, does every client need to be told that the software's search algorithms might miss relevant cases? When a firm uses accounting software to track billing hours, does every client need to be informed about the software's margin for error? Lawyers routinely use technologies without explaining them in detail to clients, and without being subjected to additional regulatory scrutiny.

Yet AI systems seem to trigger concerns that other technologies do not. The difference appears to be that AI systems are genuinely disruptive to the profession's economic model in ways that other technologies have not been. A legal research system makes lawyers more efficient at doing traditional legal work. An AI system that can do significant portions of legal work without human involvement threatens to make lawyers themselves less necessary.


The Institutional Collapse


What emerges from examining these cases is a picture of regulatory bodies that have become divorced from their stated purpose. Bar associations claim to exist to protect consumers. Yet their primary regulatory actions seem designed not to protect consumers, but to protect lawyers from competition. Consumers who would benefit from cheaper, more efficient alternatives to traditional legal services are prevented from accessing those alternatives. The language of ethics and professional responsibility is deployed in service of professional gatekeeping.

This is not entirely new. Professions have always used licensing and regulation partly as a means of controlling supply and protecting their members' economic interests. Lawyers are not unique in this regard. But the particular moment we are in—when technology has begun to make significant portions of legal work automatable—is creating a genuine conflict between what bar associations say they care about (consumer protection, competence, access to justice) and what their regulatory actions seem designed to achieve (protecting lawyer income).

Courts, rather than mediating this conflict or forcing bar associations to clarify their actual aims, have sided with the profession. Courts have interpreted ethics rules broadly, in ways clearly designed to shut down technological competition. They have done this despite the fact that the technologies being shut down appear to be competent and to be producing good outcomes for consumers. The judicial reasoning has shifted, implicitly, from "this technology might harm consumers" to "this technology threatens the profession, and we are going to protect the profession from it."

The consequences are already visible. The regulatory environment has become so hostile to legal technology innovation that venture capital investment in legal tech has begun to decline. Companies that were building products designed to help consumers navigate legal processes have shut down, or have pivoted to serving only lawyers—a market that unauthorized-practice rules cannot be used to close off. The regulatory barriers to entry are now so high that only large companies with sufficient legal resources to fight bar associations can attempt to operate in the space. And predictably, most of them have chosen not to try.

This leaves us with a legal system in which access is limited primarily by ability to pay. The rich can hire lawyers and also access new AI-powered tools that make legal services more efficient. The poor can either hire a lawyer (which most cannot afford) or navigate the legal system alone, without assistance from technology or from humans. The regulatory regime that was ostensibly created to ensure quality legal services has instead become a mechanism for preserving access inequality.

There is something particularly pernicious about using professional ethics as the mechanism for enforcing this access exclusion. Ethics rules carry moral weight. When bar associations tell us that preventing non-lawyers from using AI systems is an ethical imperative, it suggests that there is something morally wrong with wanting to provide legal assistance to people outside the professional hierarchy. The language of ethics obscures what is actually a straightforward exercise of professional gatekeeping.

The great risk, if this pattern continues, is that legal practice becomes progressively more expensive and less accessible, while simultaneously becoming less competent, as the profession protects itself against the specific technologies and innovations that would improve its efficiency. Lawyers will use AI systems internally, making themselves more productive but serving fewer clients and charging higher fees. Non-lawyers will be prevented from using the same systems to serve people who cannot afford lawyers at any price. The result will be a two-tiered legal system: sophisticated technology-assisted legal services for the wealthy, and complete absence of formal legal assistance for everyone else.

This is the endpoint of allowing professional licensing to function as pure economic gatekeeping, without any meaningful commitment to the public interest that licensing ostensibly serves. Bar associations can claim to care about consumer protection while simultaneously preventing consumers from accessing the innovations that would most benefit them. Courts can side with the profession while claiming to enforce ethical rules. And the regulatory framework that was created to ensure lawyer competence becomes instead a mechanism for preventing competition and preserving professional privilege.

For now, the system holds. Bar associations have successfully shut down most external AI systems designed to help non-wealthy consumers navigate legal processes. The regulatory message is clear: legal assistance is reserved for those who can afford to hire a lawyer. Everything else is unauthorized practice of law, and it will be stopped. The courts are backing that message up with enforcement actions and cease-and-desist orders.

But the tension is becoming harder to contain. AI systems are becoming better, not worse. Consumers are becoming more aware of what is possible. And the gap between what lawyers do and what AI systems can do is widening, not narrowing. Eventually, something has to give. Either bar associations will acknowledge that they are engaged in professional gatekeeping rather than consumer protection, or the law will change to recognize that technology has rendered their gatekeeping obsolete. Until that moment comes, the regulatory regime will remain in place, and access to justice will continue to be rationed by ability to pay.

AI · Legal Tech · Bar Associations · Courts · Professional Regulation · Access to Justice

Independent Journalism Needs You

You just read something most publications won't touch. We investigate judges who shouldn't be on the bench, attorneys who prey on clients, and a legal system that too often protects itself instead of the public. We do it openly, aggressively, and without apology.

We don't have a paywall. We don't take money from law firms, bar associations, or corporate advertisers who might prefer we stay quiet. Every piece of reporting on this site — every judge exposed, every disbarment documented, every reversal analyzed — was made possible entirely by readers like you.

If you read us regularly — if this work has ever made you angry, informed you, or helped you — we humbly ask you to support us today. It takes less than a minute. Even $1 goes directly toward keeping this reporting alive. Without it, we cannot continue.


The Ethics Reporter is independent and reader-funded. We have no corporate backers. Your support is everything.