Accuracy is not everything, even to lawyers
Technical accuracy gets you to functional. User comprehension gets you to transformational.
AI agents are shifting from copilots to autopilots, and Noam Kolt warns their speed, opacity, and autonomy demand governance rooted in inclusivity, visibility, and liability—urgent work for product and legal teams before regulation arrives.
The intersection of AI agents and enterprise accountability fascinates me, particularly the challenge of building systems that can operate autonomously while maintaining complete audit trails and decision traceability.
That's the real tension for product teams—every safety guardrail you remove increases utility but also increases the blast radius when something slips through.
AI agents need reliability built into their architecture from day one; otherwise teams pay an ongoing "reliability tax" of operational breakdowns, legal exposure, and reputational damage every time an autonomous system fails.
New research gives AI agents procedural memory that learns from failures and transfers between tasks. Early results show higher success rates with lower token costs—potentially solving the economics that have held back agent adoption.
On the In-House podcast, I shared why AI won’t erase lawyers — but it will change every role inside the legal function. Tools may act like lawyers, yet judgment and oversight remain squarely human.
Block and GSK show that successful AI agents adapt to existing workflows rather than forcing teams to rebuild around the technology. The key is making AI feel invisible while amplifying human expertise.