Quick Facts
- The Summit: IAPP Global Summit 2026 (April 7, 2026) — the world's largest privacy and data protection conference, where federal judges publicly discussed integrating AI into judicial decision-making
- The Judges: Chief Judge James Boasberg (U.S. District Court, D.C.) and Judge Allison Burroughs (U.S. District Court, Massachusetts) — two of the most prominent federal judges in the country
- Boasberg's Admission: "If AI were 95% accurate, would people come to say, 'I'd rather take AI at 95% than wait years for a judge?'" — openly floating AI-rendered decisions in administrative hearings while having personally sanctioned attorneys for using AI in his own courtroom
- The Blackout: Judge Burroughs revealed that AI features in Westlaw and LexisNexis — the two main legal research tools — have been "blacked out" in the versions used by judges, while a First Circuit pilot program lets select judges experiment with those same AI features
- The UK Parallel: Britain's judiciary has officially trained immigration judges to use Microsoft's AI Copilot to draft "decision templates," generate case outlines, and review rulings — while an immigration barrister was disciplined for using AI (ChatGPT) to prepare research citing fictitious cases
- The Immigration Backlog: 3.3 million pending U.S. immigration cases as of February 2026; 104,400 pending UK asylum appeals — backlogs being cited to justify AI-assisted judicial decisions while lawyers are sanctioned for using the same tools
- Northwestern Study: 61.6% of surveyed federal judges report using AI in judicial work; 45.5% of those who use AI received no training from their courts; only 7.4% of judges actively encourage AI use in their chambers
- Sources: IAPP (Apr. 7, 2026); The Observer/Daily Mail (Apr. 6, 2026); Northwestern University/Sedona Conference Journal (Mar. 2026); TRAC Immigration (Apr. 2026)
On April 7, 2026, two of the most powerful federal judges in the United States stood before an audience of thousands at the IAPP Global Summit — the world's largest privacy and data protection conference — and said out loud what the legal establishment has spent three years pretending is not true.
Chief Judge James Boasberg of the U.S. District Court for the District of Columbia — the same judge who presides over the Foreign Intelligence Surveillance Court, the same judge who has personally sanctioned attorneys for using AI in their briefs — asked the audience to consider a question: "If AI were 95% accurate, would people come to say, 'I'd rather take AI at 95% than wait years for a judge?'"
He then added, with the particular candor that comes from being the most important trial judge in the country: "I'd feel lucky if I were 95% accurate."
Judge Allison Burroughs of the U.S. District Court for the District of Massachusetts went further. She revealed that the AI features built into Westlaw and LexisNexis — the two primary legal research platforms used by the entire federal judiciary — have been "blacked out" in the versions available to judges. Not because judges don't want AI. Because the courts are not yet ready to let them have it openly. A pilot program in the First Circuit is now experimenting with turning those AI features back on — for judges only.
Meanwhile, one day earlier, the British press reported that immigration judges in the United Kingdom have been officially trained to use Microsoft's AI Copilot to draft judgments, generate "decision templates," create case outlines, and review their own rulings for completeness. The training was authorized by Lord Justice Dingemans, the senior president of tribunals, who told judges in a training video that AI would make them "a better judge because you're completely on top of the issues."
This is the same legal system that disciplined immigration barrister Chowdhury Rahman for using ChatGPT to prepare research that cited fictitious cases. The same profession that has sanctioned more than 800 attorneys worldwide for AI-assisted filings. The same institutional apparatus that imposed $145,000 in penalties against lawyers in the first quarter of 2026 alone.
The double standard has moved from implicit to explicit. The judges are building their AI future. The lawyers are paying for it.
The IAPP Summit: When the Quiet Part Becomes the Keynote
The IAPP Global Summit is not a legal technology conference. It is the premier gathering of privacy professionals, data protection officers, and regulatory officials from around the world. That two senior federal judges chose this venue to discuss AI in the judiciary is itself significant: they were speaking to an audience of regulators, not an audience of lawyers. The message was calibrated for institutional consumption, not for the practitioners their courts regulate.
Boasberg's remarks are worth parsing carefully, because they reveal a mindset that the legal profession has been at pains to conceal. When he suggests that AI-rendered decisions might be acceptable for "the same types of cases over and over" — specifically, administrative hearings like Social Security and immigration appeals — he is making a judgment about which categories of cases are important enough to require human attention and which can be delegated to machines.
Social Security disability hearings. Immigration asylum cases. The cases that affect the most vulnerable people in the American legal system — the disabled, the desperate, the people whose entire futures depend on the outcome of a single hearing. These are the cases that Boasberg suggests might be handled by AI. Not complex commercial litigation. Not constitutional challenges. Not the kinds of cases that wealthy clients bring to large law firms. The cases of the poor.
There is a particular cruelty in this framing. The legal establishment has spent three years telling attorneys that AI cannot be trusted to generate a reliable legal citation — that the technology's hallucination rate makes it too dangerous for the production of legal documents. And now the chief judge of the D.C. federal court is suggesting, publicly, that the same technology might be accurate enough to decide whether a disabled person receives benefits or whether an asylum seeker is deported.
The implicit logic is stark: AI is not reliable enough for lawyers to use in briefs that judges will review. But it may be reliable enough to replace the judges themselves — at least for the cases that matter least to the people running the system.
The First Circuit Pilot: When Judges Get What Lawyers Cannot Have
Judge Burroughs' revelation about the First Circuit pilot program deserves attention that it has not yet received. The two dominant legal research platforms — Westlaw (Thomson Reuters) and LexisNexis — have both integrated AI features into their products. These features use the same large language model technology that powers ChatGPT, Claude, and every other generative AI tool. They are marketed to law firms as productivity tools that can summarize cases, identify relevant precedent, and draft research memoranda.
For judges, however, these AI features have been disabled — "blacked out," in Burroughs' words. The official reason is presumably the same constellation of concerns that the courts have cited when sanctioning attorneys: hallucination risk, data security, the possibility that AI might introduce errors into judicial work product.
But the First Circuit has now created a pilot program in which selected judges can experiment with the AI features that have been disabled for the rest of the judiciary. The pilot is, by its nature, an acknowledgment that the technology has value — that judges want access to it and that the courts are preparing to give it to them.
Consider what this means in practical terms. An attorney who uses Westlaw's AI-Assisted Research to draft a brief must disclose that AI was used, must certify the accuracy of every citation, and faces sanctions ranging from monetary penalties to career-ending discipline if the AI introduces an error. A judge participating in the First Circuit pilot who uses the same AI features on the same platform to research the same legal questions faces no disclosure requirement, no certification obligation, and no sanction if the AI makes a mistake.
The technology is identical. The platform is identical. The legal research function is identical. The only difference is who is using it — and the consequences they face for using it.
Burroughs framed the AI opportunity in terms that are revealing: "Where AI will be most useful are for the things that repeat, and the things that we repeat in our chambers, we just cut and paste from one case to another." She is describing exactly the kind of work that AI excels at — repetitive, pattern-matching, document-generation tasks. She is also describing exactly the kind of work that, when performed by attorneys using AI, has resulted in sanctions, disbarments, and career destruction.
The distinction the courts are drawing is not about the technology. It is about the user. Judges may use AI for repetitive legal work because they are judges. Lawyers may not because they are lawyers. The technology does not change. The power dynamic does.
Britain Shows America Its Future: AI-Drafted Judgments, AI-Sanctioned Lawyers
If Americans want to see where the judicial AI double standard leads, they need only look across the Atlantic. The United Kingdom's judiciary has moved from pilot programs to official policy: immigration judges have been trained on Microsoft's AI Copilot and given explicit permission to use it for judicial work.
The training materials, reported by The Observer and subsequently confirmed by HM Courts and Tribunals Service, describe a comprehensive AI workflow for judges. Copilot can generate a "case outline" — an overview of the parties' evidence. It can create a "bundle summary" — a timeline of events and an outline of each side's case. It can draw up a list of disputed issues and produce a "decision template" based on those issues. It can review a judge's completed decision against a summary of the evidence and submissions, "identifying any omissions."
Lord Justice Dingemans, the senior president of tribunals, appears in a training video telling judges: "All of that work is pre-done. What that will do is mean that when you get to the hearing, you will be a better judge because you're completely on top of the issues."
Read that sentence again. The AI drafts the case outline. The AI creates the timeline. The AI identifies the disputed issues. The AI generates the decision template. The AI reviews the completed decision for omissions. The judge's role, in this workflow, is supervisory: checking the AI's work, not doing the work herself.
This is not a speculative future. This is current practice in British immigration tribunals. It is happening now, in cases that determine whether asylum seekers are returned to countries where they face persecution, in cases that decide the fate of the 104,400 people currently waiting for their immigration appeals to be heard.
And in October 2025 — six months before the judiciary officially trained its judges to use AI — immigration barrister Chowdhury Rahman was disciplined for using ChatGPT to prepare legal research. His offense: citing cases that were "entirely fictitious" or "wholly irrelevant." A tribunal found he had "failed thereafter to undertake any proper checks on the accuracy" of the AI-generated material.
Rahman used AI to prepare for a case. Judges use AI to decide cases. Rahman was disciplined. Judges were trained. The technology is the same. The professional consequences could not be more different.
The 3.3 Million Case Backlog: When Efficiency Justifies What Ethics Cannot
The justification for AI in judicial decision-making is, in every jurisdiction where it has been discussed, the backlog. In the United States, 3.3 million immigration cases are pending as of February 2026, according to the Transactional Records Access Clearinghouse at Syracuse University. In the United Kingdom, 104,400 asylum appeals are pending — a number that has nearly doubled in the last year. Social Security disability hearings in the United States have a backlog of hundreds of thousands of cases, with average wait times exceeding a year.
These backlogs are real. They cause real human suffering. People waiting years for a disability determination may lose their homes, their health, their lives. Asylum seekers waiting years for a hearing live in legal limbo — unable to work, unable to plan, unable to build a life in the country where they have sought refuge.
But the backlog argument, used to justify AI-assisted judicial decisions, exposes the fundamental dishonesty of the legal establishment's AI sanctions regime. If the courts genuinely believed that AI is too unreliable for the production of legal work — if the hallucination problem is so severe that attorneys must face six-figure sanctions, career-ending discipline, and mass-distribution humiliation orders for allowing AI errors into their filings — then the same technology cannot possibly be reliable enough to produce judicial decisions that determine the rights and futures of millions of people.
Either AI is reliable enough for legal work, in which case sanctions against attorneys who use it are disproportionate. Or AI is not reliable enough for legal work, in which case using it to draft judicial decisions is a betrayal of every person whose case is decided by that technology.
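To make the dilemma concrete, here is a minimal back-of-envelope sketch in Python. The caseload figures are the ones cited in this section; the uniform 5% error rate is a deliberate simplification taken from Boasberg's hypothetical, not a measured property of any real system.

```python
# Back-of-envelope illustration (not from the cited sources): what a
# hypothetical 95%-accurate decision system implies at the scale of
# the backlogs described in this section. Assumes errors fall
# uniformly at a 5% rate, which is a simplification.

ACCURACY = 0.95  # Boasberg's hypothetical

backlogs = {
    "US immigration cases (TRAC, Feb. 2026)": 3_318_099,
    "UK asylum appeals": 104_400,
}

for label, pending in backlogs.items():
    wrongly_decided = pending * (1 - ACCURACY)
    print(f"{label}: ~{wrongly_decided:,.0f} cases wrongly decided")

# Output:
# US immigration cases (TRAC, Feb. 2026): ~165,905 cases wrongly decided
# UK asylum appeals: ~5,220 cases wrongly decided
```

Even on that charitable assumption, clearing the current U.S. immigration backlog with a 95%-accurate system would mean roughly 166,000 wrongly decided cases.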
The legal establishment cannot have it both ways. It cannot sanction attorneys for using the same technology it is deploying in its own operations. It cannot tell lawyers that AI citations are too dangerous to submit while telling judges that AI decision templates are a path to being "a better judge."
But it is doing exactly that. And the IAPP Summit made it explicit.
Boasberg's Paradox: The Judge Who Sanctions AI and Envisions AI Judges
Chief Judge James Boasberg occupies a unique position in the AI-and-law narrative. He is the chief judge of the U.S. District Court for the District of Columbia — one of the most important federal trial courts in the country. He presides over the Foreign Intelligence Surveillance Court. He has personally handled cases in which attorneys submitted AI-hallucinated citations, and he has sanctioned those attorneys by requiring them to pay opposing counsel's fees.
This is the same judge who, at the IAPP Summit, mused about AI-rendered decisions and said he would "feel lucky if I were 95% accurate."
The admission is more revealing than Boasberg may have intended. If a federal judge acknowledges that his own accuracy rate may be below 95% — and if he believes that an AI system operating at 95% accuracy would represent an improvement — then the entire premise of the AI sanctions regime collapses. The courts are not sanctioning attorneys because AI is less accurate than human legal work. They are sanctioning attorneys because AI threatens the monopoly that judges and the legal profession hold over the production of legal decisions.
Boasberg's candor about the limitations of human judicial accuracy is, in the context of the sanctions regime, devastating. In Q1 2026 alone, the federal courts imposed at least $145,000 in sanctions against attorneys for AI-related errors — errors that, in the vast majority of cases, involved fabricated citations that were caught before they influenced any judicial outcome. No litigant lost a case because of these fabricated citations. No judicial decision was tainted by an AI hallucination. The errors were caught, corrected, and then punished with career-ending severity.
Meanwhile, human judges make substantive errors — errors of law, errors of fact, errors of judgment — that actually affect outcomes, that actually determine whether people go to prison or go free, that actually decide whether asylum seekers are deported to danger. These errors are corrected through the appellate process when they are caught, and absorbed silently into the system when they are not. No judge faces sanctions for getting a case wrong. No judge is required to distribute a public reprimand to every litigant in every pending case when he makes a mistake.
The asymmetry is not about accuracy. It has never been about accuracy. It is about power — about who gets to use the tools that increase productivity, reduce costs, and threaten to make the traditional legal model obsolete.
The Northwestern Study: What the Numbers Actually Say About Judicial AI Use
The Northwestern University study published in the Sedona Conference Journal in March 2026 provides the empirical foundation for understanding the scope of judicial AI adoption. The numbers are worth examining in detail, because they tell a story that the IAPP Summit presentations only hinted at.
Of the 112 federal judges who responded to the survey, 61.6% reported using at least one AI tool in their judicial work. But the more granular findings are even more instructive:
- 30% of judges use AI for conducting legal research — the same activity that gets attorneys sanctioned when the AI produces a hallucinated citation.
- 15.5% use AI for reviewing documents — the same activity that, in the discovery context, has been restricted for pro se litigants through protective orders requiring enterprise-grade AI contracts.
- 22.4% use AI on a weekly or daily basis — meaning more than one in five federal judges are regular AI users in their judicial capacity.
But here is the finding that should alarm anyone who believes the sanctions regime is about protecting the integrity of judicial proceedings: 45.5% of judges who use AI received no training from their courts on how to use it. Nearly half of the judges using AI in their judicial work are doing so without any formal guidance on the technology's limitations, failure modes, or proper verification procedures.
Compare this to the standard imposed on attorneys. An attorney who uses AI without adequate training and verification procedures faces sanctions under Rule 11, referral to the state bar under Rule 8.4, and potentially career-ending discipline. A judge who uses AI without training faces nothing — no disclosure requirement, no certification obligation, no accountability mechanism of any kind.
The Northwestern researchers found that only 7.4% of judges actively encourage AI use in their chambers. One in five judges — 20% — formally prohibit it. The rest fall somewhere in between, with 25.9% permitting AI use and 24.1% having no policy at all. The absence of policy is, in a system that imposes mandatory disclosure requirements on attorneys, itself a form of institutional hypocrisy: the courts demand transparency from practitioners while maintaining opacity about their own AI practices.
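For scale, here is a minimal sketch converting the study's percentages into approximate headcounts. It assumes every percentage is a share of all 112 respondents; the published study may compute some figures over narrower denominators (for example, AI users only for the training question).

```python
# Minimal sketch: convert the Northwestern survey percentages into
# approximate judge counts, assuming each percentage is a share of
# all 112 respondents (the study may use narrower denominators).

RESPONDENTS = 112

findings = {
    "use at least one AI tool in judicial work": 0.616,
    "use AI for legal research":                 0.300,
    "use AI for reviewing documents":            0.155,
    "use AI weekly or daily":                    0.224,
    "received no AI training from their court":  0.455,
    "actively encourage AI use in chambers":     0.074,
    "formally prohibit AI use":                  0.200,
    "permit AI use":                             0.259,
    "have no AI policy at all":                  0.241,
}

for finding, share in findings.items():
    print(f"~{round(share * RESPONDENTS):>3d} of {RESPONDENTS} judges {finding} ({share:.1%})")
```

On that reading, dozens of federal judges are doing AI-assisted judicial work, and none of them faces the disclosure or certification obligations imposed on the attorneys who appear before them.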
The Access-to-Justice Inversion: AI for the Powerful, Prohibition for the Vulnerable
The pattern emerging from the IAPP Summit, the UK immigration judiciary, and the Northwestern study is not merely a double standard. It is an inversion of the access-to-justice promise that AI was supposed to fulfill.
The people who most need AI in the legal system are not federal judges with law clerks, research assistants, and institutional resources. They are the unrepresented litigants navigating complex legal systems without counsel. They are the public defenders handling crushing caseloads with inadequate staffing. They are the legal aid attorneys serving clients who cannot afford to pay for the hours of human labor that traditional legal work requires.
Instead, the legal establishment is building a system in which AI is available to the people who need it least — judges with existing institutional support — and prohibited or severely restricted for the people who need it most. Pro se litigants face protective orders requiring enterprise-grade AI contracts they cannot afford. Attorneys face sanctions for AI errors that judges commit without consequence. The technology that could democratize legal services is instead being monopolized by the institution that has the most to lose from its democratization.
When Judge Burroughs says "the things that AI are not going to be able to capture, they are judgment issues" — and then describes AI being used for everything except the judgment call itself — she is describing a system in which AI does the work and the judge gets the credit. That system is available to the judiciary. It is not available to the lawyers who appear before the judiciary, the clients those lawyers represent, or the millions of Americans who cannot afford a lawyer at all.
The Surveillance State Connection: Boasberg's Other Role
There is one additional dimension to the IAPP Summit that has received no attention and deserves a great deal. James Boasberg is not just the chief judge of the D.C. federal court. He is the presiding judge of the Foreign Intelligence Surveillance Court — the secret court that authorizes government surveillance of American citizens under the Foreign Intelligence Surveillance Act.
At the IAPP Summit — a privacy conference — Boasberg discussed AI adoption in the judiciary while sitting in the unique position of overseeing the court that authorizes the most invasive surveillance programs in the federal government. The intersection is not coincidental. AI-powered surveillance is one of the most consequential applications of artificial intelligence in the government context, and the FISA Court is the institution that authorizes it.
When Boasberg speculates about AI-rendered judicial decisions, he is speaking from a position in which the intersection of AI and judicial authority has already been actualized — in the surveillance context, where AI tools process vast quantities of communications data and the FISA Court approves applications based on material that is itself generated and organized by automated systems. The question of whether AI can participate in judicial decision-making is not theoretical for Boasberg. It is operational.
Judge Burroughs, for her part, observed that existing laws and constitutional protections "are not keeping up, never have kept up and never will keep up" with the speed of innovation. She was speaking about privacy and surveillance. But the observation applies with equal force to the AI sanctions regime: the rules governing AI use in the legal profession are not keeping up with the technology's capabilities, and the courts' response has been to sanction practitioners rather than adapt the rules.
The Institutional Logic: Why the Double Standard Persists
The two-tier AI system — judges may use it, lawyers may not — persists because it serves the institutional interests of the judiciary. Understanding those interests requires understanding the economics of judicial power.
Judges derive their authority from scarcity. There are approximately 870 authorized Article III judgeships in the United States. Federal judges' decisions carry weight because they are scarce, because the appellate process creates hierarchies of authority, and because the institutional prestige of the federal bench attracts the kind of human capital — law clerks from top law schools, experienced staff attorneys, talented judicial assistants — that reinforces the quality of judicial output.
AI threatens this scarcity in two ways. First, if AI can produce legal analysis and even draft decisions at scale, the argument for expanding the judiciary — or for creating alternative dispute resolution mechanisms that bypass the judiciary entirely — becomes stronger. Second, if AI makes lawyers more productive, the caseload pressure that justifies judicial AI adoption decreases, because better-prepared lawyers produce better briefs that require less judicial research and analysis.
The sanctions regime addresses the second threat. By making AI use dangerous for lawyers, the courts ensure that lawyers remain dependent on traditional legal methods — methods that are slower, more expensive, and more likely to produce the kind of complex, poorly organized filings that give judges reason to complain about their caseloads. The caseload pressure then becomes the justification for judicial AI adoption: we need AI because the lawyers aren't giving us what we need. But the reason the lawyers aren't giving the courts what they need is that the courts have made it too dangerous for lawyers to use the tools that would let them do so.
This is a self-reinforcing cycle, and it is not accidental. The judiciary benefits from a system in which lawyers are constrained while judges are not, because that system preserves the power differential that defines the relationship between bench and bar.
The Question No One at the IAPP Summit Asked
In the audience at the IAPP Global Summit, as Boasberg floated AI-rendered decisions and Burroughs described the First Circuit pilot program, a question went unasked. It is the question that should have been asked, and it is the question that the legal establishment will eventually be forced to answer:
If a judge uses AI to draft a decision in an immigration case, and the AI introduces an error — a misstatement of law, an incorrect factual finding, an omission that changes the outcome — who is accountable?
Not the AI. Not the vendor. Not the court administration that authorized the AI's use. Under current doctrine, the judge is accountable — but judicial accountability operates through the appellate process, not through sanctions, discipline, or career consequences. A judge whose AI-assisted decision contains an error will be reversed on appeal, if the error is caught. If it is not caught — if the losing party cannot afford an appeal, if the error is subtle enough to escape appellate review, if the case is an immigration removal that has already been executed by the time the appeal is decided — the error is absorbed into the system without consequence.
An attorney whose AI-assisted brief contains an error faces $109,700 in sanctions, career-ending discipline, mass-distribution humiliation orders, and referral to the state bar. A judge whose AI-assisted decision contains an error faces reversal — maybe — and nothing else.
The question Boasberg should have been asked is not whether AI at 95% accuracy is good enough for administrative hearings. It is whether the people whose lives are decided in those hearings deserve the same protections against AI error that the courts demand for every brief filed by every attorney in every case in the country.
The answer, implicit in the legal establishment's two-tier AI system, is no. The powerful get the tools. The vulnerable get the consequences.
Conclusion: The Veil Has Been Lifted
What happened at the IAPP Global Summit on April 7, 2026, was not a policy announcement. It was not a rule change. It was not a formal judicial order. It was something more consequential: an admission.
Two federal judges, speaking to an audience of regulators at the world's most important privacy conference, admitted that the judiciary is preparing to use AI for judicial decision-making. They admitted that AI features in legal research platforms have been specifically disabled for judges — and that a pilot program is underway to re-enable them. They admitted that AI might replace judges in certain categories of cases. And they admitted, without any apparent awareness of the irony, that all of this is happening while the same judiciary imposes the most severe professional consequences in the history of American attorney discipline on lawyers who use the same technology for the same purposes.
The veil has been lifted. The legal establishment's AI sanctions campaign is not about accuracy, not about hallucinations, not about protecting clients or the integrity of the judicial process. It is about building a two-tier system in which the institution that controls the legal profession gets first access to the most transformative technology in the profession's history, while the practitioners who serve the public — and the public itself — are forced to wait, to pay, and to suffer the consequences of a prohibition that the prohibitors have no intention of applying to themselves.
Boasberg said he would feel lucky if he were 95% accurate. Burroughs said existing protections will never keep up with innovation. Dingemans told British judges that AI would make them "a better judge."
Meanwhile, somewhere in America, an attorney is being sanctioned for a fabricated citation. Somewhere, a pro se litigant is being told she cannot use ChatGPT to review her discovery documents. Somewhere, a public defender is deciding not to use AI because the risk to his career is too great — and his client will receive slower, less thorough representation as a result.
The judges know AI works. They are using it. They are building pilot programs to use it better. They just don't want you to use it too.
Sources and Citations
- IAPP. (Apr. 7, 2026). "US federal judges discuss the intersection of emerging technology, AI with the legal system." iapp.org
- The Observer / Daily Mail. (Apr. 6, 2026). "Immigration judges are using chatbots to check and draft rulings to deal with record court backlog." dailymail.co.uk
- Northwestern University / Sedona Conference Journal. (Mar. 2026). "Federal judges report broad adoption of AI tools." news.northwestern.edu
- TRAC Immigration. (Apr. 2026). "Immigration Court Quick Facts: 3,318,099 pending cases as of February 2026." tracreports.org
- NPR. (Apr. 3, 2026). "Penalties stack up as AI spreads through the legal system." npr.org
- ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties." complexdiscovery.com
- Linna, D., Subrahmanian, V.S., et al. (2026). "Artificial Intelligence in Federal Courts." The Sedona Conference Journal.
- HM Courts and Tribunals Service (UK). Official statement on AI use in immigration tribunals (Apr. 2026).
