Stop acting surprised when AI tools produce garbage citations


I spent Tuesday morning reading about a Georgia trial judge who signed off on an order citing two completely fabricated cases: decisions that never existed, hallucinated by AI and not caught until appeal. The husband's lawyer then doubled down, citing eleven more fake cases to defend the original two and even requesting attorney's fees on the strength of a made-up decision.

Catastrophe feels like the wrong word here, but this moment in Shahid v. Esaam marks the crossing of a line we've been nervously watching. Joe Patrice at Above the Law captured it perfectly: there's a critical difference between submitting fake cases (embarrassing, sanctionable) and judges acting on them (system-breaking). The appellate court had to cite Chief Justice Roberts' 2023 warning about AI hallucinations—suddenly that much-mocked "typewriter report" looks prescient.

I keep thinking about the workflow that failed here. The opposing party was pro se, so no lawyer caught the fakes. The trial judge apparently rubber-stamped a proposed order without checking citations. The system's safeguards—adversarial process, judicial scrutiny—both missed. That's not a technology problem, it's a process problem that technology is exploiting.

For product teams building AI tools for lawyers, this case writes your testing requirements for you. Your users will try to submit hallucinated citations, and some percentage will slip through human review. The question isn't whether to build guardrails; it's how many layers, and what kind of friction, you're willing to accept to keep your tool from producing this outcome. 🏗️
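To make that concrete, here's a minimal sketch of the cheapest first layer: a pre-filing gate that extracts citation-shaped strings from a draft and refuses to release it unless every one verifies against a trusted index. This is my own illustration, not anything from the case or from a real product; the KNOWN_CITATIONS set and the regex are stand-ins for a real reporter database and a real citation parser.

```python
import re
from dataclasses import dataclass

# Stand-in for a trusted index of real decisions. In practice you'd query
# CourtListener, Westlaw, Lexis, or an in-house reporter database instead.
KNOWN_CITATIONS = {
    "347 U.S. 483",   # Brown v. Board of Education
    "410 U.S. 113",   # Roe v. Wade
}

# Rough pattern for "volume Reporter page" citations, e.g. "347 U.S. 483".
# A production tool would use a real citation parser, not a regex.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.0-9 ]{1,20}?\s+\d{1,5}\b")


@dataclass
class CitationCheck:
    citation: str
    verified: bool


def gate_draft(draft_text: str) -> tuple[bool, list[CitationCheck]]:
    """Extract citation-shaped strings and verify each one.

    Returns (ok_to_release, per-citation results). Anything that fails
    verification should route to a human, not to the court.
    """
    checks = [
        CitationCheck(c, c in KNOWN_CITATIONS)
        for c in CITATION_PATTERN.findall(draft_text)
    ]
    return all(c.verified for c in checks), checks


if __name__ == "__main__":
    # Deliberately fabricated citation, standing in for an AI hallucination.
    draft = "Fees are warranted under Smith v. Jones, 999 Ga. App. 123 (2020)."
    ok, results = gate_draft(draft)
    for r in results:
        status = "verified" if r.verified else "NOT FOUND: hold for human review"
        print(f"{r.citation!r} -> {status}")
    print("Release draft" if ok else "Block draft: unverified citations present")
```

The design choice that matters is the failure mode: an unverified citation doesn't get silently passed through or quietly deleted; it stops the draft and gets routed to a human. That's exactly the friction the Shahid workflow was missing.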

Trial Court Decides Case Based On AI-Hallucinated Caselaw - Above the Law
Appellate court to trial judge: you know these cases are made up, right?