AI Is Transforming Legal Practice — Not Always for the Better
Artificial intelligence tools like ChatGPT, Claude, and legal-specific AI platforms are being used by attorneys across the country to draft briefs, conduct research, summarize depositions, and generate contract language. Used carefully, AI can make legal work faster and more accessible. Used carelessly, AI can produce legal filings that contain fabricated case citations, misstatements of law, and hallucinated facts — leading to sanctions, dismissals, and serious harm to clients.
The Hallucination Problem
AI language models generate text by predicting which words are likely to follow other words, based on patterns in their training data. They do not "know" facts in the way humans do — they generate plausible-sounding text that may or may not be accurate. When asked to cite legal cases, AI models will sometimes generate citations to cases that do not exist, or cite a real case name but fabricate its holding, its facts, or its quotations.
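To see why this produces confident fabrications, consider a deliberately tiny sketch of the underlying idea. The toy "model" below picks each next word based only on which words followed it in its training text — it has no concept of truth, so it can splice together fragments of real citations into a citation that never existed. (All case names here are invented for illustration; real models are vastly more sophisticated, but the core mechanism — statistical plausibility rather than factual lookup — is the same.)

```python
import random

# Tiny "training corpus" of invented citations (illustration only).
corpus = (
    "Smith v. Jones , 45 F.3d 101 . "
    "Brown v. Davis , 92 F.3d 210 . "
    "Smith v. Davis , 17 F.3d 455 ."
).split()

# Bigram table: each word maps to the words observed right after it.
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

# "Generate" by repeatedly sampling a plausible next word.
random.seed(0)
word = "Smith"
output = [word]
for _ in range(6):
    word = random.choice(followers.get(word, ["."]))
    output.append(word)

# Every word is plausible in context, but the assembled citation
# (party names + reporter volume + page) may never have existed.
print(" ".join(output))
```

The point of the sketch: the generator recombines pieces that each look right locally, which is exactly how a language model can output a case name, volume number, and page that are individually plausible but jointly fictitious.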
This is known as "hallucination," and it has led to serious consequences in real cases. In the now-famous Mata v. Avianca case (S.D.N.Y. 2023), an attorney submitted a brief containing citations to six nonexistent cases generated by ChatGPT, and was sanctioned $5,000. Similar incidents have followed in courts across the country, with sanctions ranging from monetary fines to bar referrals.
What Courts Are Doing About It
Courts have responded to AI-generated hallucinations with a patchwork of standing orders requiring attorneys to disclose AI use and certify the accuracy of all citations. As of 2026, more than 300 federal judges have issued AI-related standing orders. Many state courts have followed. The Ethics Reporter covers all major AI sanctions cases — see our AI Sanctions topic page for the full picture.
Your Rights as a Client
As a client, you have the right to competent representation. An attorney who submits AI-generated work without verifying its accuracy may be violating their duty of competence under Rule 1.1 of the Model Rules of Professional Conduct. If your attorney has submitted inaccurate AI-generated work in your case, you may have grounds for a bar complaint and, if the errors harmed your case, potentially a malpractice claim.