Agents of change
AI agents can do real work or generate chaos. The difference isn't capability—it's human judgment.
Signals are quick snapshots of emerging changes in AI, law, and technology—highlighting patterns to notice before they fully unfold.
The real constraint on agentic AI isn't model capability—it's governance infrastructure. Organizations treat agentic platforms as LLM deployment vehicles when they need complete enterprise systems with guardrails, evaluation layers, and audit mechanisms built in.
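What "built in" can look like in practice: a minimal sketch of an agent call wrapped in guardrail and audit layers. Everything here (the blocked-term check, the JSONL audit file, the function names) is a hypothetical illustration, not any vendor's implementation.

```python
import json
import time
import uuid


def check_guardrails(prompt: str) -> bool:
    """Hypothetical policy check; a real system would call a policy engine."""
    blocked_terms = ["wire transfer", "delete all"]  # illustrative only
    return not any(term in prompt.lower() for term in blocked_terms)


def audit_log(record: dict) -> None:
    """Append-only audit trail so every agent action is reviewable later."""
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


def run_agent(prompt: str, model_call) -> str:
    """Wrap the model call with guardrail and audit layers."""
    record = {"id": str(uuid.uuid4()), "ts": time.time(), "prompt": prompt}
    if not check_guardrails(prompt):
        record["outcome"] = "blocked"
        audit_log(record)
        raise PermissionError("Prompt rejected by guardrail policy")
    output = model_call(prompt)  # the LLM call itself is the easy part
    record["outcome"] = "allowed"
    record["output_digest"] = hash(output)
    audit_log(record)
    return output
```

The point of the sketch is the shape, not the specifics: the model call sits inside the governance layer, so nothing reaches the model or leaves it without a record.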
G2 data shows that 60% of companies have AI agents in production, with failure rates under 2%, contradicting MIT's prediction of 95% project failure. For legal teams, this means governance frameworks can't wait for academic consensus when systems are already deployed.
Mastercard's Agent Pay creates verifiable authorization trails for AI transactions, embedding accountability directly into payment infrastructure rather than treating it as an afterthought.
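Agent Pay's internals aren't published here, but the underlying pattern, a transaction record whose authorization can be verified after the fact, is easy to sketch. A hedged illustration using an HMAC signature; the key handling and field names are assumptions, not Mastercard's design:

```python
import hashlib
import hmac
import json

# Hypothetical key issued to the agent at registration; in practice this
# would live in a key-management service, not in source code.
AGENT_KEY = b"issued-by-the-network-at-registration"


def sign_authorization(agent_id: str, amount_cents: int, merchant: str) -> dict:
    """Bind a payment authorization to the agent's identity so the trail
    is cryptographically verifiable after the fact."""
    payload = json.dumps(
        {"agent": agent_id, "amount_cents": amount_cents, "merchant": merchant},
        sort_keys=True,
    )
    sig = hmac.new(AGENT_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}


def verify_authorization(record: dict) -> bool:
    """Anyone holding the key can confirm the agent really authorized this."""
    expected = hmac.new(
        AGENT_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```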
Concentric AI found Copilot accessed nearly 3 million confidential records per organization in six months—more than half of all externally shared files. The traceability challenge: documenting which data informed each AI-generated output.
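One concrete response to that challenge is to write a provenance record at generation time, tying each output to content hashes of its source documents. A minimal sketch, assuming a hypothetical retrieval-augmented workflow:

```python
import hashlib
import json
import time


def provenance_record(output: str, source_docs: list[str]) -> dict:
    """Record which documents informed an AI-generated output, keyed by
    content hashes so the record survives file moves and renames."""
    return {
        "ts": time.time(),
        "output_digest": hashlib.sha256(output.encode()).hexdigest(),
        "sources": [hashlib.sha256(d.encode()).hexdigest() for d in source_docs],
    }


# Usage: after a (hypothetical) retrieval-augmented generation call,
# persist the record alongside the output.
docs = ["Q3 board minutes...", "draft merger agreement..."]
answer = "Summary produced from the two documents above."
with open("provenance.jsonl", "a") as f:
    f.write(json.dumps(provenance_record(answer, docs)) + "\n")
```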
This deal aims to save millions in costs and to provide a legal shield against copyright lawsuits over public-data scraping. But the move, executed post-strike, heightens the unresolved IP conflict over creator consent for AI training.
OpenAI wants ChatGPT conversations to be legally privileged, but traditional privilege requires professional accountability. For deployers, the gap is concrete: your team uses AI for strategy, you get sued, and you can't shield those conversations from discovery.
Alibaba's offline training framework creates pre-aligned agent models without API costs, making custom research agent development more economically feasible for enterprises with domain-specific needs.