Death by Dialogue: The Case for Killing the Legal Chatbot
Legal work needs density, not dialogue.
Reflections are deeper dives into how law, technology, and innovation intersect. These longer-form pieces analyze research and emerging trends — offering perspectives that help teams navigate what's coming next.
AI agents fail because nobody defined what "customer" means in your business. Ontology infrastructure provides semantic guardrails that technical controls alone can't deliver.
Agentic AI's real failure point isn't the model — it's the data pipeline. When agents act autonomously on corrupted data, output guardrails can't save you. Your data needs a constitution, not better prompts.
Karpathy says vibe coding is passé. The new term is "agentic engineering" — and for legal and product teams, the distinction is a governance question, not a branding one.
Google research shows AI models that simulate internal debates dramatically outperform those that reason in monologue. For governance teams, the implication is clear: if dissent drives accuracy, hiding the chain-of-thought undermines trust.
RAG didn't die — it got rebranded as "context engineering." Kinda...
2026 isn't about new AI capabilities — it's about stabilizing the ones we already have. For product counsel, governance built on shifting tools is governance built on sand.
The authors suggest treating AI agents as "legal actors" — entities that bear duties — without granting them legal personhood.