Same AI hallucination movie, different law firm
The Chicago Housing Authority hired lawyers to fight a $24 million lead paint verdict, and those lawyers decided ChatGPT could find them a winning case citation. The AI obliged by inventing Mack v. Anderson, a fake Illinois Supreme Court decision that would have been perfect if it existed. Now Danielle Malaty, the partner who ran the query, is out of a job at Goldberg Segalla. The ironic twist: she had previously published an article on AI ethics in law practice that never mentioned the hallucination problem.
This feels like watching the same movie on repeat. Lawyer uses AI without understanding how it works, AI hallucinates, lawyer gets sanctioned or fired, firm promises to do better. What makes this case different is the institutional review failure. After Malaty generated the brief, three other firm attorneys and the client's in-house counsel reviewed it before filing. None of them caught the fake citation because none of them actually pulled and read the case.
The firm's AI policy apparently banned using AI altogether, which explains why Malaty used it secretly and why the review process wasn't designed to catch AI-generated citations. Prohibition creates the exact conditions that make hallucinations dangerous: no institutional knowledge about verification, no standardized checking procedures, and lawyers using AI tools without telling anyone.
Smart legal departments are moving in the opposite direction. Instead of banning AI, they're building citation verification into every document workflow, regardless of how the citations were generated. If you assume every cite could be wrong, whether it came from AI, a law student, or a partner's hazy recollection, you build systems that catch fake cases before they reach the judge. The technology isn't the problem; the missing verification step is.
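In practice that check can be dumb and mechanical: extract everything that looks like a citation, then confirm each one against a source you trust before the brief goes out the door. Here is a minimal sketch in Python under assumed, simplified conditions. The regex, the verify_citation stub, the known_cases set, and the sample citations are all invented for illustration; a real system would query Westlaw, Lexis, CourtListener, or the court's own records rather than an in-memory set.

```python
import re

# Rough pattern for a few common reporter formats ("123 Ill. 2d 456",
# "2019 IL 123456", "45 N.E.3d 678"). Illustrative only; real citation
# parsing is far messier than one regex.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:Ill\.\s*(?:2d|App\.)?|IL|U\.S\.|"
    r"N\.E\.\s*(?:2d|3d)?|F\.\s*(?:2d|3d|Supp\.)?)\s+\d{1,6}\b"
)


def extract_citations(brief_text: str) -> list[str]:
    """Pull everything that looks like a case citation out of a draft."""
    return sorted(set(CITATION_RE.findall(brief_text)))


def verify_citation(citation: str, known_cases: set[str]) -> bool:
    """Stand-in for the real check: a Westlaw/Lexis query, a CourtListener
    lookup, or a person pulling the reporter volume and reading the case."""
    return citation in known_cases


def audit_brief(brief_text: str, known_cases: set[str]) -> list[str]:
    """Return every citation in the draft that could not be verified."""
    return [c for c in extract_citations(brief_text)
            if not verify_citation(c, known_cases)]


if __name__ == "__main__":
    draft = (
        "As held in Doe v. Roe, 2019 IL 123456, and reaffirmed in "
        "Smith v. Jones, 123 Ill. 2d 456, the duty extends to..."
    )
    # Pretend only one of the two citations checks out against the database.
    verified = {"123 Ill. 2d 456"}
    for bad in audit_brief(draft, verified):
        print(f"UNVERIFIED: {bad} -- pull and read the case before filing")
```

The code is the trivial part. The institutional change is making the check non-optional, so a fabricated citation fails loudly at review instead of sliding past four sets of eyes.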
