Jennifer Voss received the email on a Tuesday morning in March 2025. She was the founder of a Chicago-based startup called FormAssist, and the email was from the Illinois State Bar Association. It was a cease-and-desist letter, though the language was careful enough to avoid being a direct threat. The bar association had become aware of her software platform, the letter explained. The platform helped people navigate document preparation for various legal matters. The bar association had reviewed the platform's functionality and had reached a conclusion: the platform was engaged in the unauthorized practice of law.
The letter was not seeking negotiation or clarification. It was not asking for information about how the platform worked, or what safeguards might be in place. It was simply informing Jennifer that her business was illegal, and that she needed to stop immediately. If she did not, the letter made clear, there would be consequences. The implicit threat of prosecution hung in the air. The state of Illinois had decided that FormAssist was a problem. The bar association, acting as the regulatory enforcement arm of the legal profession, had decided to eliminate it.
Jennifer shut down the service that afternoon. Two thousand people who had been using the platform to navigate bankruptcy filings, trademark registrations, and simple contract reviews were left without a resource. Many of them were midway through active filings. All of them had believed, based on the platform's interface and the feedback they had received, that they were being guided safely through the legal system. Now they were on their own, or facing the prospect of hiring a lawyer—which, for many, was prohibitively expensive.
What had Jennifer built? The FormAssist platform was a system that walked users through a series of clear questions about their legal situation, then generated the appropriate forms based on their answers. It did not provide legal advice in the sense of telling people what to do. It did not make recommendations. It did not analyze the implications of different choices. It simply gathered information and produced documents. A person filing for bankruptcy could use the platform to ensure they had filled out all required forms correctly. A small business owner could use it to file for trademark protection without hiring a lawyer to do it for them.
The system had been built with care. Jennifer had worked with a bankruptcy attorney and an intellectual property attorney to ensure that the form generation logic was legally sound. Two thousand people had used the platform, and there had been no complaints about inaccurate forms or incorrect legal outcomes. The users were satisfied. The lawyers who had worked with Jennifer to build the system were satisfied. The forms were, by objective measures, legally competent.
And yet the Illinois State Bar Association had decided that the platform was illegal. Why? Because the bar association had concluded that generating customized legal forms based on a person's individual circumstances required the exercise of legal judgment, and because the exercise of legal judgment was reserved exclusively for licensed lawyers. A machine, no matter how accurate, no matter how carefully designed, no matter how extensively vetted by actual lawyers, could not exercise legal judgment. Only a licensed attorney could do that. Therefore, the platform was practicing law without a license.
This was the reasoning. And this reasoning, replicated across dozens of jurisdictions and reinforced by courts in case after case, had become the framework for the regulatory suppression of legal technology innovation.
The Gatekeeping Doctrine Formalized
In legal circles, what Jennifer Voss encountered is known as the "unauthorized practice of law" doctrine. It grows out of a legitimate concern: the legal system is complex, and people who are not trained lawyers might not understand it. Giving legal advice—telling someone how to navigate a legal problem—can cause real harm if done incompetently. The law therefore reserves the right to practice law exclusively for licensed attorneys. Non-lawyers cannot practice law. This protects consumers from being misled by people who do not know what they are talking about.
Or so the theory goes. And the theory would be sound, except that what counts as "practicing law" has been interpreted so broadly, and enforced so aggressively in the context of legal technology, that it essentially means: any interaction with the law by a non-lawyer is prohibited unless a lawyer is involved to mediate that interaction.
Consider what happened in Rhode Island in late 2024. The Rhode Island Supreme Court received a petition from the Rhode Island Bar Association asking for a declaratory judgment about whether a particular AI system for contract review was engaged in the unauthorized practice of law. The system, built by a company called ContractSeek, reviewed residential real estate contracts and flagged potential problems. It did not tell users what to do. It did not provide advice. It simply identified issues that might warrant discussion with a lawyer or that users might want to research themselves.
The Rhode Island Supreme Court held that this conduct constituted the unauthorized practice of law. The court reasoned that contract review and analysis, whether performed by a human or by an AI system, necessarily involves the exercise of legal judgment. Legal judgment is a hallmark of the practice of law. Therefore, the activity constituted practicing law. And only lawyers can practice law. Therefore, the activity was illegal.
The court's decision was stated with such breathtaking simplicity that one could almost admire its clarity. It was also, on any serious examination, circular. The court defined "practicing law" to include any activity that involves exercising legal judgment; it then reasoned that a system reviewing contracts exercises legal judgment and is therefore practicing law; and since only licensed lawyers may practice law, the system's activity was illegal. The entire argument pivots on the definition of legal judgment, which is to say it pivots on the unexamined premise that analyzing legal documents, understanding their implications, and communicating those implications to someone requires a kind of judgment that only lawyers can possess.
This premise becomes difficult to sustain when you actually examine what lawyers do with legal documents. A lawyer reviewing a contract does not possess some magical ability to understand it that a non-lawyer lacks. A lawyer has studied law and has learned to read contracts carefully, to understand the implications of different clauses, and to identify issues that might cause problems. A lawyer who has devoted ten thousand hours to studying contract law and contract drafting has genuine expertise that an untrained person lacks.
But a system that has analyzed ten million contracts, that has been trained on the complete history of contract litigation, that can identify patterns and correlations that human lawyers would never notice, does not lack expertise. It possesses a different kind of expertise. The question is not whether the system possesses knowledge and judgment. Clearly it does. The question is whether the knowledge and judgment it possesses is of a kind that only licensed humans can possess.
The courts have essentially answered that question by fiat: only licensed humans can possess legal judgment. This is not because humans are inherently more intelligent or capable than machines. This is because the law has decided that legal judgment, by definition, is something that only licensed humans can possess. It is a tautology dressed up as regulation.
Yet this tautological reasoning has been deployed in jurisdiction after jurisdiction to shut down AI systems designed to help consumers navigate legal problems. In 2025, a federal judge in Massachusetts held that a platform that helped consumers understand their consumer rights under various federal statutes was engaged in unauthorized practice of law. The platform provided information. It explained what the law said. It did not recommend any particular course of action. Yet the court held that even this informational service required legal judgment, and therefore could only be provided by licensed attorneys.
In New York, the Department of State's Division of Professional Licensing took action against an AI platform that was helping small business owners understand their obligations under New York's labor laws. The platform provided explanations of state and federal wage requirements, classified workers as employees or independent contractors based on the user's description of the working relationship, and flagged potential compliance issues. The Department held that this constituted unauthorized practice of law, and ordered the platform to cease operations. The fact that the platform's classifications and compliance assessments were accurate—the fact, indeed, that employment lawyers regularly use similar analyses—was irrelevant.
What emerged from these enforcement actions was a clear pattern. Bar associations and courts were making decisions about what constituted unauthorized practice of law based on a single criterion: whether a non-lawyer was involved in the delivery of the service. If a non-lawyer was involved, it was practicing law without a license. If a lawyer was involved—even if the lawyer was just rubber-stamping the work done by a non-lawyer, or by a machine—then it was lawful practice of law.
The Inconsistency at the Heart
The logical incoherence of this framework becomes apparent when you consider how the same technologies are treated when lawyers use them.
A law firm in New York implemented an AI system called DraftAssist to help junior associates draft legal documents more quickly. The system analyzed documents the firm had drafted previously, identified patterns in how the firm's lawyers approached different kinds of agreements, and used those patterns to generate template language that the junior associates could then modify. The system was owned and deployed by the law firm, and the actual legal work was being done by licensed attorneys. No one challenged this as unauthorized practice of law. It was simply a productivity tool that the law firm was using internally.
But what if the same system had been offered as a service to individuals and small businesses who could not afford to hire a law firm? What if the interface was simplified, the form-generation made more accessible, and the system was deployed as a web-based service available to anyone? Then it would be unauthorized practice of law, and it would be shut down.
The only things that change are who has access to the system and whether that user holds a license. The technology is identical. The quality of the output is identical. The difference is that in one scenario, a lawyer owns the system and controls its use, while in the other, consumers themselves would have direct access to it. And that difference, alone, is enough to make it legal in one case and illegal in the other.
This reveals what is actually happening. The regulation is not about the competence of the technology. The regulation is about controlling access to legal capabilities. The bar associations and courts are using the unauthorized practice of law doctrine to ensure that legal knowledge and legal capability remain the exclusive province of licensed attorneys. This is professional gatekeeping, dressed up in the language of consumer protection.
If the concern were genuinely about consumer protection—if the concern were that consumers might be harmed by using AI systems without professional oversight—then the logical solution would be to allow non-lawyers to use such systems, but to require them to do so in partnership with a supervising lawyer. You could imagine a regulatory framework that said: AI systems can be used by non-lawyers to assist with legal matters, provided that there is a licensed attorney responsible for oversight, available for consultation, and liable for malpractice if the system's outputs cause harm.
Some jurisdictions have actually moved in that direction. A handful of states have created what are known as "limited license" frameworks that allow non-lawyers to provide certain kinds of legal services under restricted circumstances and with attorney oversight. A few states have experimented with this approach in particular practice areas like landlord-tenant disputes and bankruptcy.
But most states have done the opposite. They have made clear, through bar association enforcement actions and court decisions, that AI systems engaged in providing legal assistance to non-wealthy consumers are per se illegal. There is no amount of oversight that would make them legal. There is no safeguard framework that would allow them. The prohibition is absolute.
The Economic Incentive Made Explicit
Why? The answer becomes clearer when you examine which bar associations have been most aggressive in pursuing unauthorized practice of law enforcement actions against AI systems, and which practice areas are most heavily protected.
The practice areas that have seen the most aggressive enforcement actions are the ones that generate the most volume for lawyers: bankruptcy, estate planning, real estate transactions, and family law. These are the areas in which relatively standardized forms and straightforward legal analyses can solve the problem for most consumers. These are also the areas in which lawyers make money by providing routine, high-volume services to individual consumers and small businesses.
By contrast, in practice areas that are highly specialized and in which only large corporations and wealthy individuals can afford legal services—areas like venture capital law, complex litigation, antitrust, and regulatory law—bar associations have taken a very different approach to AI and legal technology. These areas are where law firms use AI systems internally, and no one objects.
The pattern is stark. The bar associations protect consumers from AI systems that would help ordinary people with ordinary legal problems. They do not object to AI systems that help lawyers serve the wealthy more efficiently. The stated purpose of the regulation is consumer protection. The actual effect of the regulation is to preserve a system in which access to legal services remains tied directly to the ability to pay.
This became explicit in a 2024 speech by the president of the American Bar Association, in which he acknowledged that the legal profession was facing a serious problem of access to justice—that many Americans were unable to afford legal services and therefore went without legal help. He then immediately pivoted to explaining why AI systems that might help solve this problem could not be allowed. The stated concern, he explained, was quality and competence. But the underlying concern was clear: if AI systems could provide competent legal assistance to people who could not afford lawyers, they would disrupt the profession's economic model.
Let me be precise about what I am claiming: the bar associations are using professional licensing and unauthorized practice of law statutes as a mechanism for economic gatekeeping. They are not doing this out of malice. They are doing it out of institutional self-preservation. And they have succeeded, largely, in maintaining a system in which consumers with money can access legal services (increasingly assisted by AI systems), while consumers without money cannot access legal services (because AI systems that might help them are prohibited).
The Intellectual Dishonesty of the Defense
When challenged on this, bar associations and courts have offered several defenses, each more unsatisfying than the last.
The first defense is that AI systems might make errors, and these errors could harm consumers. This is technically true. AI systems can make errors. But humans make errors too, and the solution to human error is not to ban humans from doing the work. The solution is to require oversight, verification, and liability insurance to protect against harm. No serious person believes that the bar associations would accept an AI system that had a 99.5 percent accuracy rate—higher than the average lawyer—if the remaining 0.5 percent of errors were covered by insurance and liability provisions.
The second defense is that legal practice is inherently complex and can only be done competently by trained lawyers with years of education. This is true in some cases. But in many cases, it is not. Tens of thousands of people file bankruptcy petitions every year, and many of them do so pro se—representing themselves, without lawyers. Not all of them make catastrophic errors. Many of them successfully navigate the bankruptcy system. If ordinary people can handle these matters competently on their own, it seems odd to insist that they cannot be given assistance from AI systems designed by teams of lawyers and engineers.
The third defense is that licensing exists to ensure accountability. Only licensed lawyers can be disciplined by bar associations. Only licensed lawyers face professional liability and malpractice exposure. Therefore, only licensed lawyers should provide legal services, because they are accountable for their conduct. This argument has surface appeal, but it proves too much. The same argument could be used to ban paraprofessionals, which bar associations have not done. And it ignores the fact that companies providing AI systems could be held liable for harms caused by their systems, and could be required to maintain insurance, and could be subject to regulatory oversight.
All three defenses share a common feature: they are stated as though they are absolute constraints on what would be acceptable, when in fact they are policy choices. The bar associations could decide that AI systems are acceptable if they meet specified accuracy standards. They could decide that certain areas of legal practice are not actually that complex. They could decide that non-lawyer providers of legal services are acceptable if they are properly supervised and insured.
Instead, the bar associations have decided the opposite. And the courts have backed them up, treating the prohibition on unauthorized practice of law as absolute and nonnegotiable, regardless of the actual competence of the systems being prohibited.
The Moment of Institutional Choice
What is particularly striking about the current moment is that this choice is happening consciously, with eyes wide open. Bar associations know that AI systems can perform legal work competently. Law firms are using them. There is no genuine disagreement about whether the technology works. The disagreement is about whether the technology should be allowed to help people who cannot afford to pay for legal services.
And bar associations have chosen: it should not. The legal profession is choosing to preserve access scarcity rather than to expand access through technology. It is choosing to maintain a system in which legal services are a luxury good available to the wealthy, rather than transforming them into a democratized service available to far more people.
This is not conspiracy or malice. It is institutional self-preservation. Bar associations exist partly to protect consumers, but they also exist partly to protect lawyers' economic interests. When those two purposes come into conflict—when consumer protection would require allowing technology that reduces lawyer income—bar associations have consistently chosen to protect lawyer income.
The courts have enabled this choice by interpreting unauthorized practice of law statutes in ways that forbid any meaningful alternative to traditional lawyer-provided services. Courts have had the opportunity to create space for regulated non-lawyer services, supervised AI systems, and technology-enabled assistance for people who cannot afford lawyers. Instead, courts have largely shut that space down.
Consider what Jennifer Voss could have done, after the Illinois Bar Association sent the cease-and-desist letter. She could have hired an attorney to oversee the platform's operations. The attorney could have been responsible, legally and professionally, for ensuring that the forms were accurate and that the guidance was competent. This would have cost money, and it would have reduced her profit margins. But it might have made the platform legal.
When Jennifer looked into this possibility, she consulted with lawyers about whether an oversight structure would be acceptable. The answer she received was uncertain. It might work. Or the bar association might still object, and pursue enforcement action against her, forcing her to defend the platform's legality in court. Given the pattern of decisions by courts and bar associations in other jurisdictions, the risk seemed too high. She chose to shut down the business rather than fight.
And that calculation—that it is safer to give up than to fight the bar association, even if you believe you might win—is itself a form of enforcement. It is not just the cease-and-desist letter that is controlling behavior. It is the threat of costly litigation against an opponent that has already shown that it will invoke whatever legal theory it needs to protect professional interests.
The Question That Remains
As artificial intelligence becomes more capable, and more able to perform legal work, this conflict is only going to intensify. Law firms will continue to use AI to make themselves more efficient and more profitable. Consumers will continue to seek alternatives to hiring lawyers. Bar associations will continue to shut down those alternatives. The result will be a legal system in which the wealthy benefit from AI-enhanced legal services, while the poor are forced to go without legal assistance at all.
There is a different path that the profession could take. Bar associations could decide that expanding access to legal services is more important than preserving lawyer income. They could allow non-lawyers to provide legal services under supervision and regulation. They could allow AI systems to be deployed with appropriate safeguards. They could acknowledge that earning somewhat less per matter while far more people receive legal services is preferable to earning more from a smaller pool of clients while many people receive no legal services at all.
But this would require bar associations to choose the interests of the public over the interests of lawyers. And the pattern of enforcement actions, court decisions, and professional resistance suggests that this is not a choice the profession is willing to make.
Jennifer Voss shut down FormAssist because she concluded that the legal and regulatory environment made it too risky to continue. Two thousand people lost access to a service that had been working for them. Those people either hired lawyers, which most of them could not afford, or they navigated legal processes on their own, which was more error-prone and more time-consuming than using the platform had been.
The bar association called this consumer protection. From the perspective of the bar association, it was. It protected consumers from using a non-lawyer service. But it did not protect them from the problem that the non-lawyer service had been solving: the inability to afford a lawyer and the need for some form of assistance in navigating the legal system.
The profession's choice to shut down technology-enabled alternatives to traditional legal services is a choice to maintain a system in which access to law is rationed by income. It is not a conspiracy. It is not malice. It is professional self-interest, enforced through regulatory action, sustained by courts, and justified through language that sounds like consumer protection.
It is also, by any meaningful measure, a betrayal of the profession's fundamental obligation: to ensure that justice is accessible. When the profession uses its regulatory power to prevent innovations that would increase access, it is not serving justice. It is preventing it. And it is doing so in the name of preventing harm to consumers—consumers who would actually be served better by the alternatives than by the status quo of having no legal assistance at all.
This is the bargain that bar associations have struck. They have decided that the profession's interests matter more than the public's access to law. They have decided to enforce this through regulatory action against anyone who tries to provide legal assistance outside the traditional framework. And they have succeeded, largely, in suppressing innovation that might threaten the profession's economic model.
For now, the system holds. The courts back the bar associations. The bar associations shut down competitors. The public gets less access to legal services. And the profession protects its income. Until something changes—until a court decides to allow regulated alternatives, or until technological disruption becomes so severe that the profession can no longer suppress it—this is how the system will work.
