Surge AI's leaked safety doc shows why your AI guidelines need lawyers
That's the real tension for product teams—every safety guardrail you remove increases utility but also increases the blast radius when something slips through.
Signals are quick snapshots of emerging changes in AI, law, and technology—highlighting patterns to notice before they fully unfold.
AI agents need reliability built into their architecture from day one to avoid the ongoing "reliability tax" of operational breakdowns, legal exposure, and reputational damage from unreliable autonomous systems.
Harvey's new alliance program with Stanford, UCLA, NYU, Michigan, and Notre Dame
New research gives AI agents procedural memory that learns from failures and transfers between tasks. Early results show higher success rates with lower token costs—potentially solving the economics that have held back agent adoption.
On the In-House podcast, I shared why AI won't erase lawyers but will change every role inside the legal function. Tools may act like lawyers, yet judgment and oversight remain squarely human.
Block and GSK show successful AI agents adapt to existing workflows rather than forcing teams to rebuild around technology. The key is making AI feel invisible while amplifying human expertise.
RAG isn't broken—it's that we treated it as the default when it should have been the exception.
YouTube's July 15 Partner Program update targets AI-generated filler while protecting legitimate creators—creating a template that other platforms facing similar content quality pressures will likely follow.