Law schools build AI fluency through hands-on training programs
Law schools teach AI verification skills through hands-on training. Yale students build models, then hunt for hallucinations. Penn gives 300 students ChatGPT access in legal writing classes. Early movers are creating graduates who understand both AI's capabilities and its limits.
Law schools are finally catching up to what practicing attorneys learned through sanctions: AI without judgment is expensive. The methodology matters more than the mandate, and schools are getting creative about how they teach verification skills.
Yale's approach particularly intrigues me. Rather than treating AI as a research tool, Scott Shapiro has students build and train models, then deliberately hunt for hallucinations. This teaches pattern recognition for unreliable outputs, which transfers to evaluating any automated legal research.
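To make that concrete, here's a minimal sketch of one kind of hallucination hunt: pull the case citations out of a model's answer and flag any that don't appear in a trusted list. Everything in it, from the KNOWN_CASES set to the sample output, is a hypothetical stand-in rather than Yale's actual exercise; a real version would check against an actual citation database.

    # Hypothetical hallucination hunt: flag model-cited cases that
    # don't appear in a trusted reference set. KNOWN_CASES and
    # model_output are illustrative stand-ins, not real course material.
    import re

    KNOWN_CASES = {  # in practice, a real citation database
        "Marbury v. Madison, 5 U.S. 137 (1803)",
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
    }

    model_output = (
        "As held in Marbury v. Madison, 5 U.S. 137 (1803), and reaffirmed "
        "in Smith v. Jones, 512 U.S. 999 (1994), judicial review applies."
    )

    # Crude citation pattern: "Name v. Name, VOL U.S. PAGE (YEAR)"
    citation_re = re.compile(r"[A-Z][\w.']+ v\. [A-Z][\w.']+, \d+ U\.S\. \d+ \(\d{4}\)")

    for cite in citation_re.findall(model_output):
        status = "verified" if cite in KNOWN_CASES else "POSSIBLE HALLUCINATION"
        print(f"{cite} -> {status}")

Even a toy check like this makes the failure mode tangible: the fabricated Smith v. Jones cite gets flagged immediately, and building that flagging reflex by hand is exactly the pattern recognition the coursework is after.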
Penn's decision to give 300 students access to ChatGPT Edu in legal writing classes signals a real shift: the school is normalizing AI as part of the writing process while maintaining academic oversight. The secure environment lets students experiment without the academic integrity concerns that have paralyzed other institutions.
Course titles at Chicago, such as "Generative AI in Legal Practice" and "Editing, Advocacy, and AI," suggest the school understands the real task: rebuilding fundamental legal skills for a world where first drafts may come from machines while judgment and verification remain human responsibilities.
The gap between elective courses (36% include AI) and doctrinal classes (where only 12% do) shows schools are still figuring out integration. Early movers are creating graduates who understand AI's capabilities and limitations; their competitors are still debating honor code language.