Independent Legal Ethics Journalism
April 9, 2026

The AI Discovery Lockdown: How Courts Are Using Protective Orders to Strip AI Tools from the People Who Need Them Most

Quick Facts

  • The Case: Morgan v. V2X, Inc., No. 1:25-cv-01991 (D. Colo., Mar. 30, 2026) — Magistrate Judge Maritza Dominguez Braswell
  • The Situation: A pro se plaintiff (self-represented individual) in an employment discrimination case wanted to use AI tools to process confidential discovery materials. Corporate defendant V2X, Inc. demanded to know which AI tools he was using and sought restrictions on which tools he could use.
  • The Order: Court issued a modified protective order requiring any AI tool used with confidential discovery materials to have: (1) contractual prohibitions on using inputs to train AI models, (2) restrictions on onward disclosure, and (3) data deletion rights — effectively banning consumer AI tools like ChatGPT and Google Gemini from use with confidential materials
  • The Effect: Large corporations and large law firms with enterprise AI (secure, closed-circuit environments) may continue using AI. Individual pro se litigants — who need AI most — are largely barred from using the only AI they can afford.
  • Related Cases: Warner v. Gilbarco, Inc. (Feb. 10, 2026): AI-assisted materials protected as work product; United States v. Heppner (Feb. 2026): AI chat logs NOT privileged, can be seized and used as prosecution evidence
  • Analysis Sources: Herbert Smith Freehills Kramer (Apr. 8, 2026); Sidley Austin (Apr. 6, 2026); Everlaw (Apr. 2, 2026)
  • Pattern: Courts are expanding AI control beyond the courtroom — now regulating how parties use AI during the entire litigation process, even in private preparation

The legal profession's war on AI has opened a new front — one that has nothing to do with hallucinated citations, nothing to do with attorneys filing bad briefs, and everything to do with controlling who gets to use artificial intelligence while fighting a lawsuit.

On March 30, 2026, a federal magistrate judge in the District of Colorado issued a ruling in Morgan v. V2X, Inc. that, dressed up in the language of data privacy and confidentiality protection, effectively told a self-represented individual he could not use ChatGPT, Google Gemini, or any other widely available consumer AI tool to process confidential discovery materials. His adversary — a well-funded corporate defendant with access to enterprise AI platforms operating in "secure, closed-circuit environments" — faces no such restriction.

The legal establishment has now built a two-tiered AI justice system. In tier one: large law firms and corporations, which can use AI in their privileged, enterprise-licensed, contractually secured environments. In tier two: pro se litigants, public defenders, small-firm solo practitioners, and ordinary Americans trying to navigate a complex legal system without attorneys — people who rely on the same consumer AI tools that the courts have now declared insufficient for use with confidential materials.

The court called AI "one of the most powerful knowledge tools ever to become available to the masses." Then it restricted the masses from using it.

The Case: A Pro Se Plaintiff, a Corporate Defendant, and a Battle Over AI Discovery

The Morgan case is, on its surface, an employment discrimination lawsuit. The plaintiff — an individual appearing without counsel against V2X, Inc., a government services and defense contractor — is fighting the kind of legal battle that most people cannot afford to fight with professional representation. Employment discrimination cases are complex, document-intensive, and often turn on the careful analysis of voluminous discovery materials: internal communications, HR records, performance reviews, personnel files.

These are exactly the kinds of documents that AI tools excel at processing. A pro se litigant who can upload discovery materials to an AI and ask "does anything in these documents suggest discriminatory treatment?" has a dramatically better chance of identifying relevant evidence than one who must manually read every document. For litigants who cannot afford attorneys, AI-assisted discovery review is not a luxury — it is the difference between mounting a real case and going through the motions.

The plaintiff in Morgan understood this. He sought to use AI tools to bridge the technology gap between himself and his corporate adversary. V2X raised an alarm: what AI tools was he using? Was he uploading the company's confidential trade secrets and personnel files to ChatGPT? To Gemini? To AI platforms that might retain, analyze, or use that data for model training?

These are not entirely unreasonable questions. Consumer AI platforms do, under some configurations, retain user inputs, and some have historically used them to train models. A company's confidential personnel files uploaded to a commercial AI platform could theoretically be exposed in ways that a standard protective order does not contemplate.

But the court's response to this concern did not protect both parties equally. It protected the corporation.

What the Court Actually Ordered — and What It Actually Means

Magistrate Judge Maritza Dominguez Braswell issued a modified protective order requiring that any AI tool used to process confidential discovery materials must be subject to three contractual safeguards: (1) a prohibition on using inputs to train AI models; (2) restrictions on onward disclosure of uploaded data; and (3) the ability to delete data uploaded by the user.

These requirements sound reasonable until you ask: which AI tools meet them? As of March 30, 2026, the answer is: enterprise AI platforms marketed to large organizations with sophisticated procurement and legal teams. The consumer-facing versions of ChatGPT, Google Gemini, Microsoft Copilot, and similar tools — the tools that ordinary people actually use — do not come with these contractual guarantees in their standard free or low-cost tiers. Some enterprise versions do. But enterprise versions cost money that pro se litigants do not have, require contracting processes that solo practitioners find burdensome, and involve due diligence that small firms lack the resources to conduct.

The court acknowledged this directly. It recognized that its requirements would "limit the use of most widely available, consumer-facing AI tools for confidential discovery materials." It acknowledged the distinction between AI tools operating "in a secure, closed-circuit environment" and less secure alternatives. And it issued the order anyway.

The result is a court order that explicitly acknowledges AI as "one of the most powerful knowledge tools ever to become available to the masses" — and then restricts the masses from using it in precisely the contexts where they most need it.

V2X, Inc. — represented by professional counsel with access to enterprise legal technology platforms — continues to use AI in its litigation preparation without restriction, because its AI tools are already in compliance with the contractual requirements the court imposed. The pro se plaintiff must either find and contract with an enterprise AI provider, forgo AI assistance entirely, or risk violating the protective order.

The Fourth Amendment Detour: When Courts Acknowledge Reality Before Ignoring It

The most revealing passage in Judge Dominguez Braswell's order is the court's discussion of the Fourth Amendment and the reasonable expectation of privacy. The court asked, openly: does routing data through a third-party system — like Gmail — forfeit all privacy protections? The court answered no. "Routing information through a third-party system does not forfeit all privacy," Judge Dominguez Braswell wrote, grounding the analysis in Fourth Amendment search-and-seizure case law.

This is exactly right. And it applies to AI tools as well. If routing email through Google's servers does not forfeit privacy, why does uploading a document to ChatGPT for analysis forfeit the confidentiality protection of the discovery materials contained in that document?

The court's answer was essentially: because AI chatbots are different. Unlike passive search engines, AI platforms are "specifically designed and trained to engage." They "invite candid and significant disclosure of information, including sensitive information. They simulate empathy, foster trust, and interact in a way that feels genuine and intimate."

For pro se litigants specifically, the court noted, AI interactions "closely resemble the kind of confidential, strategy-laden iterative work product" that the work product doctrine was designed to protect.

And then the court restricted those very interactions.

The court simultaneously acknowledged that pro se litigants' AI interactions deserve work product protection — and imposed restrictions that make using AI for discovery analysis practically impossible for pro se litigants. If the AI interaction deserves protection, why does the court need to restrict it? If it needs to be restricted, how can it simultaneously be protected work product?

The answer is that the court was not actually trying to resolve a coherent legal theory. It was trying to accommodate a corporate defendant's anxiety about a pro se plaintiff using AI to level a technological playing field that has never been level.

The Three-Way Split That Reveals the Contradiction

The Morgan ruling is part of a trilogy of cases from early 2026 that collectively reveal just how incoherent the legal profession's approach to AI has become.

In United States v. Heppner (S.D.N.Y., Feb. 2026), a federal court held that a criminal defendant's AI chat logs are not protected by attorney-client privilege or the work product doctrine, can be seized by search warrant, and can be used as prosecution evidence. The reasoning: an AI chatbot is not an attorney, so communications with it are not privileged.

In Warner v. Gilbarco, Inc. (D. Colo., Feb. 10, 2026), a different federal court held the opposite: a pro se plaintiff's AI-assisted materials were protected work product, because they reflected the plaintiff's own mental impressions. The court also held that using a public AI tool did not, by itself, constitute waiver of work product protection — directly rejecting the corporate defendant's argument that uploading materials to ChatGPT forfeited protection.

In Morgan v. V2X, Inc. (D. Colo., Mar. 30, 2026), a third court attempted to split the difference: the identity of the AI tool may have to be disclosed (it is not protected work product), while the interactions themselves may be protected. Then it restricted those interactions anyway, through a protective order that effectively requires enterprise AI.

Three cases. Three different answers. One overarching pattern: in each case, the practical outcome favored institutional power over the individual. In Heppner, the prosecution gets the criminal defendant's AI thinking. In Warner, the court protects the pro se plaintiff's AI work — but only in theory, because as a district-court ruling it binds no other court, and Morgan, decided in the same district weeks later, already narrows its practical reach. In Morgan, the court protects corporate confidentiality by restricting the pro se plaintiff's AI use.

The split among federal courts on AI privilege is now real. And the legal profession — including the courts — has shown no interest in resolving it in ways that expand access to justice.

The Jeffries Case: AI Restrictions Spreading Across Discovery

The Morgan ruling is not isolated. Jeffries v. Harcros Chemicals, Inc., another case analyzed in Sidley Austin's April 6 overview of AI and discovery protective orders, shows courts confronting related disputes about how protective orders should address AI use more broadly. The Sidley analysis notes that "disagreements about how protective orders should address the use of AI in discovery — issues previously handled through negotiation — now will be informed by guidance from the courts."

Translation: lawyers used to work out AI-related discovery disputes privately, through negotiation. Now courts are stepping in to set the rules — and as Morgan demonstrates, when courts step in, the rules tend to advantage large, well-resourced parties over individuals.

The pattern is expanding beyond hallucinated citations and sanctioned attorneys. The legal establishment has recognized that AI represents a fundamental threat to the information asymmetry that makes professional legal representation valuable. If a pro se plaintiff can use AI to analyze 10,000 pages of corporate discovery documents as effectively as a $500-per-hour associate, the professional monopoly is threatened. The discovery process — one of the most expensive and attorney-intensive phases of litigation — becomes affordable.

The solution, as Morgan demonstrates, is to use protective orders to restrict AI use for the party that needs it most, while leaving the party that has always had access to sophisticated legal technology free to continue using it.

The Access-to-Justice Crisis This Ruling Exacerbates

The legal profession has known for decades that it is failing the public on access to justice. The Justice Gap — the yawning disparity between legal need and legal service delivery — leaves roughly 80 percent of low-income Americans' civil legal needs unmet or inadequately addressed. In a country where hourly attorney fees range from $150 to over $1,000, the legal system is functionally closed to most of its users.

AI was supposed to change this. Legal tech companies, access-to-justice advocates, and legal reformers have pointed to AI-assisted legal tools as the most promising development in the history of the access-to-justice movement. If AI can help ordinary people understand their rights, analyze documents, draft pleadings, and navigate procedural requirements, the gap between those who can afford lawyers and those who cannot narrows dramatically.

Morgan v. V2X narrows it back. By requiring enterprise-grade AI contracts as a condition of using AI in discovery, the court has ensured that the technology remains accessible to the parties who already had access to sophisticated tools — and inaccessible to the parties who needed it to compensate for that disparity.

Judge Dominguez Braswell's order does not say "pro se litigants cannot use AI." It says "any AI tool used with confidential discovery materials must have contractual safeguards." But the practical effect is the same: a corporate defendant with an in-house legal technology team and existing enterprise AI contracts can comply effortlessly. A self-represented plaintiff without legal training, without institutional resources, and without the ability to negotiate enterprise software contracts cannot.

The ruling also imposes a disclosure burden that does not exist for attorneys. Pro se litigants must, under the order, be prepared to disclose what AI tool they are using and whether it meets the specified contractual requirements. Attorneys at large firms using enterprise AI are under no comparable obligation to demonstrate their AI tools' compliance with data security requirements in every case where AI is used.

The asymmetry is not accidental. It is the architecture of institutional gatekeeping, applied to the one technology that most directly threatens the profession's monopoly on legal work.

The Data Privacy Pretext: Real Concern, Weaponized Response

It is worth acknowledging what is true in the legal profession's AI concern: data privacy in discovery is a real issue. Consumer AI platforms have, at various points in their development, retained user inputs and used them to improve their models. Uploading a client's confidential personnel files to a tool that trains on user data is a genuine risk that a sophisticated attorney should carefully evaluate.

But the legal profession's response to this real concern — using court orders to restrict AI use by pro se litigants and small practitioners while leaving enterprise users unaffected — is wildly disproportionate to the actual risk, and the disproportion is not random. It systematically advantages the parties who already benefit from the information asymmetry at the heart of modern litigation.

A more balanced approach would look something like this: require all parties — including represented parties and their counsel — to comply with equivalent AI data security requirements. Impose the same protective order restrictions on corporate counsel using Westlaw's AI-assisted research, Harvey, CoCounsel, Lexis+AI, and other enterprise legal AI platforms that the court imposed on the pro se plaintiff using ChatGPT. Make the requirement neutral and universal, not selectively applicable to the party who most needs AI assistance.

This has not happened. And it will not happen, because the courts — staffed by attorneys who spent their careers in a profession built on the economics of legal scarcity — are not inclined to impose on large law firms the same restrictions they impose on self-represented individuals.

The data privacy concern is real. The weaponization of that concern against the parties who most need AI is a choice — an institutional choice that reveals whose interests the courts are actually serving.

The Forbes Analysis: AI Sanctions Are Accelerating, Not Deterring

One irony embedded in all of this institutional resistance is that it is not working. Forbes contributor Lance Eliot, writing on April 6, 2026, analyzed the statistical prevalence of AI hallucination sanctions in legal filings and found that the pace of AI adoption among attorneys is accelerating despite — and in some ways because of — the sanction regime. Attorneys who have seen colleagues sanctioned are not abandoning AI; they are adopting more sophisticated AI-use practices and, in many cases, using AI more carefully and more thoroughly than they did before the sanctions began.

The sanctions are not deterring AI adoption. They are selecting for more sophisticated AI users. Attorneys who use AI carelessly, without verification, without understanding the tools' failure modes, are being filtered out by the sanction regime. Attorneys who understand AI and use it responsibly are continuing to adopt it, because the productivity advantages are simply too significant to abandon out of fear.

This is exactly what happens when an institution tries to use regulatory deterrence against a technology that is fundamentally superior for the tasks it performs. The technology wins. The question is whether the regulatory deterrence does sufficient collateral damage — to access to justice, to pro se litigants, to the attorneys who face disproportionate penalties — before the institution accepts that it cannot suppress the technology through punishment.

The answer, based on the Morgan ruling and the accelerating pace of AI-related court orders, is that the collateral damage will be substantial. The profession is not conceding gracefully. It is imposing as many costs as possible on the way to losing a battle it cannot win.

What Comes Next: AI in Discovery as the New Battleground

The Morgan ruling is, in the view of legal technologists at Everlaw, a "potential blueprint for modern litigation." Courts across the country will read Judge Dominguez Braswell's order and consider imposing similar requirements in their own cases. Lawyers representing corporate defendants will begin including AI-restriction language in their proposed protective orders as a matter of routine. And pro se litigants, who are not represented by lawyers and who are not reading the Sidley Austin AI discovery briefings, will continue using consumer AI tools without knowing that they may be violating court orders.

The discovery battleground is particularly dangerous territory for the legal establishment's AI resistance campaign, because it is here that the access-to-justice implications are most stark. In the courtroom, the argument for AI restrictions has at least a veneer of legitimacy grounded in accuracy: courts need to be able to trust the citations in filings. In discovery — the private process of exchanging and reviewing documents before trial — the argument for AI restrictions has no client-protection rationale whatsoever. It is pure competitive advantage maintenance: limiting the tools available to the party that starts at a disadvantage.

When Judge Dominguez Braswell acknowledged that AI is "one of the most powerful knowledge tools ever to become available to the masses" and then restricted the masses from using it in discovery, she encapsulated in a single ruling the contradiction at the heart of the legal profession's AI policy: the profession acknowledges the technology's democratizing potential while systematically working to ensure that democratization does not occur.

Conclusion: The Battlefield Has Moved from the Courtroom to the Case File

For three years, the legal establishment's campaign against AI focused on the courtroom: sanctioning attorneys who submitted hallucinated citations, imposing mandatory disclosure requirements on AI-assisted filings, humiliating practitioners who failed to verify AI output. The target was the end product — the document filed with the court.

Morgan v. V2X marks a new phase. The battlefield has moved from the courtroom to the case file — from the documents attorneys submit to courts, to the process by which litigants prepare their cases, review evidence, and develop legal strategy. Courts are now asserting jurisdiction over how parties use AI in the privacy of their own litigation preparation, imposing restrictions that extend the profession's AI gatekeeping from the filing cabinet to the desktop.

For pro se litigants — the people for whom AI was always most transformative — this expansion of the gatekeeping perimeter is the most dangerous development yet. It is one thing to require that attorneys verify AI-generated citations before filing briefs. It is another to require that ordinary people fighting corporate defendants in federal court use only enterprise-grade AI that costs money they don't have, in compliance with data security contracts they have no capacity to negotiate.

The legal profession continues to call this consumer protection. It continues to invoke data privacy, judicial integrity, and professional responsibility as the justifications for AI restrictions that happen, in every case, to advantage institutional parties over individuals.

But the pattern is now too consistent to ignore. When a court acknowledges AI as a powerful tool for the masses and then restricts the masses from using it — in the same order, in the same paragraph — the legal establishment's priorities have become transparent.

It is not protecting consumers. It is protecting itself.


Sources and Citations

  • Morgan v. V2X, Inc., No. 1:25-cv-01991 (D. Colo., Mar. 30, 2026).
  • Herbert Smith Freehills Kramer. (Apr. 8, 2026). "US Courts Find Privilege Applies to Use of Public AI Tools by Self-Represented Litigants." hsfkramer.com
  • Sidley Austin LLP. (Apr. 6, 2026). "Generative AI in Discovery: Protective Orders as an Emerging Point of Dispute." sidley.com
  • Everlaw. (Apr. 2, 2026). "Morgan v. V2X Decision Signals a Turning Point for AI Data Privacy." everlaw.com
  • United States v. Heppner, 2026 U.S. Dist. LEXIS 32697 (S.D.N.Y., Feb. 2026).
  • Warner v. Gilbarco, Inc., No. 1:22-cv-00481 (D. Colo., Feb. 10, 2026).
  • Forbes / Lance Eliot. (Apr. 6, 2026). "Analyzing the Statistical Prevalence of Lawyers Getting Snagged by AI Hallucinations in Their Court Filings." forbes.com
  • NPR. (Apr. 3, 2026). "Penalties Stack Up as AI Spreads Through the Legal System." npr.org
  • Federal Rules of Civil Procedure, Rule 26(b)(3) (Work Product Doctrine).