Agents of change
AI agents can do real work or generate chaos. The difference isn't capability—it's human judgment.
The real constraint on agentic AI isn't model capability; it's governance infrastructure. Organizations treat agentic platforms as LLM deployment vehicles when what they actually need are complete enterprise systems with guardrails, evaluation layers, and audit mechanisms built in.
G2 data shows 60% of companies have AI agents in production with under 2% failure rates—contradicting MIT predictions of 95% project failure. For legal teams, this means governance frameworks can't wait for academic consensus when systems are already deployed.
Most companies are building autonomous AI capabilities faster than they can deploy them safely. The gap shows up in identity systems that can't handle agent credentials, APIs built for humans rather than machines, and costs that spiral when agents loop endlessly.
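The runaway-cost failure mode, at least, has a well-understood containment pattern: a hard budget wrapped around the agent loop. A minimal sketch follows; every name in it (Budget, run_agent, the step result's cost_usd and done fields) is a hypothetical illustration, not any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    max_steps: int = 20          # hard cap on tool-call iterations
    max_cost_usd: float = 5.00   # hard cap on cumulative spend

def run_agent(step_fn, budget: Budget):
    """Run an agent loop until it finishes or exhausts its budget.

    step_fn is assumed to perform one model/tool call and return an
    object with .cost_usd (float) and .done (bool) attributes.
    """
    spent, history = 0.0, []
    for step in range(budget.max_steps):
        result = step_fn(history)            # one model/tool call
        spent += result.cost_usd
        history.append(result)
        if result.done:
            return history
        if spent >= budget.max_cost_usd:
            raise RuntimeError(f"Budget exhausted after {step + 1} steps (${spent:.2f})")
    raise RuntimeError(f"Step limit reached without completion (${spent:.2f} spent)")
```

The design point is that the cap lives outside the agent: no amount of looping, retrying, or prompt drift can spend past it.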
When each step is 98% accurate, a 20-step process succeeds end to end only about 67% of the time (0.98^20 ≈ 0.67). That compound-probability problem explains why AI agents excel at narrow tasks but need deterministic scaffolding for enterprise workflows, and why hybrid systems win.
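The arithmetic is easy to verify, and it also shows why the hybrid approach pays off. In the sketch below, the 15/5 split between deterministic and probabilistic steps is an illustrative assumption, not a figure from the source.

```python
# Compound reliability: per-step accuracy p over n sequential steps.
p, n = 0.98, 20
print(f"End-to-end success: {p**n:.1%}")   # ~66.8%

# If deterministic scaffolding handles 15 of the 20 steps (accuracy ~1.0),
# only 5 steps remain probabilistic:
print(f"Hybrid success:     {p**5:.1%}")   # ~90.4%
```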
That reliability math changes everything from procurement to integration to the economics of enterprise software.
Backends are retreating to governance roles while AI agents become the execution layer. InfoQ's analysis shows this architectural shift is already happening in production at banks, healthcare systems, and call centers—with major implications for legal teams.
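What "backend as governance layer" means in practice: the agent decides what to do, and the backend decides only whether it may, then records the decision. A minimal sketch under those assumptions; the policy table, action shape, and function names are all hypothetical.

```python
import json, time

# Illustrative policy: which agent-proposed actions are allowed, and within what limits.
POLICY = {"refund": {"max_amount": 500}, "close_ticket": {}}

def governance_check(action: dict) -> bool:
    """Approve or reject an agent-proposed action against policy, and audit it."""
    rule = POLICY.get(action["type"])
    approved = rule is not None and action.get("amount", 0) <= rule.get("max_amount", float("inf"))
    audit = {"ts": time.time(), "action": action, "approved": approved}
    print(json.dumps(audit))               # stand-in for an append-only audit log
    return approved

governance_check({"type": "refund", "amount": 120})   # approved
governance_check({"type": "refund", "amount": 9000})  # rejected, and the rejection is logged
```

The execution logic that used to live in backend services moves into the agent; what stays behind is exactly the validation and audit trail that legal teams will be asked about.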
Organizations are building AI compliance functions like they built human compliance departments—but without the foundational work of defining what compliance means for autonomous systems that operate in unanticipated contexts.