The Perplexity problem: when AI assistants challenge web infrastructure assumptions
AI systems won't fit our old categories, and our legal frameworks haven't caught up yet.
The architectural shift to persistent, structured memory is happening now. Teams building these systems need to classify the memory types their agents require and define the associated governance policies upfront.
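To make that concrete, here is a minimal sketch of what such an upfront classification might look like; the memory taxonomy, policy fields, and retention values are all illustrative assumptions, not drawn from any particular system:

```python
from dataclasses import dataclass
from enum import Enum, auto


class MemoryType(Enum):
    """Hypothetical taxonomy of agent memory; the categories are illustrative."""
    EPISODIC = auto()    # records of past interactions
    SEMANTIC = auto()    # distilled facts about users or domains
    PROCEDURAL = auto()  # learned workflows and tool-use patterns
    WORKING = auto()     # scratch state for the current task


@dataclass(frozen=True)
class GovernancePolicy:
    """Illustrative governance knobs a team might fix per memory type."""
    retention_days: int   # how long entries may persist
    user_deletable: bool  # whether end users can purge the data
    audit_logged: bool    # whether reads and writes are recorded


# The point of the exercise: every memory type gets an explicit policy
# before the agent ships, rather than being decided ad hoc later.
POLICIES: dict[MemoryType, GovernancePolicy] = {
    MemoryType.EPISODIC:   GovernancePolicy(retention_days=90,  user_deletable=True,  audit_logged=True),
    MemoryType.SEMANTIC:   GovernancePolicy(retention_days=365, user_deletable=True,  audit_logged=True),
    MemoryType.PROCEDURAL: GovernancePolicy(retention_days=365, user_deletable=False, audit_logged=True),
    MemoryType.WORKING:    GovernancePolicy(retention_days=1,   user_deletable=True,  audit_logged=False),
}
```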
The most practical insight is accepting that perfect solutions do not yet exist. Traditional agency mechanisms offer valuable frameworks for identifying problems and structuring solutions, but new technical and legal infrastructure must evolve in tandem with the technology.
Ruth Porat's human-in-the-loop mandate gives product teams a concrete framework for building agentic systems that users will trust and regulators will accept.
After Stanford Law's agentic AI program, one thing was clear: companies are building autonomous capabilities faster than they can deploy them responsibly. This piece is part of a series that emerged from that program, exploring organizational frameworks that can keep pace with AI autonomy.
TL;DR: The rapid deployment of agentic AI systems across organizations has created an urgent need for comprehensive traceability and auditability frameworks.
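As a loose sketch of what one traceability record for an agent action could contain, assuming nothing about any specific framework (every field name here is a hypothetical chosen for illustration):

```python
import json
import time
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class AgentActionRecord:
    """One illustrative audit-trail entry for a single agent action."""
    agent_id: str      # which agent acted
    action: str        # e.g. "tool_call:refund"
    inputs: dict       # arguments the agent supplied
    initiated_by: str  # upstream user or request id, for tracing causality
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

    def to_log_line(self) -> str:
        """Serialize to an append-only JSON line for later audit."""
        return json.dumps(asdict(self))


# Usage: write the record before executing the action, so the trail
# exists even if the action itself fails mid-flight.
record = AgentActionRecord(
    agent_id="billing-agent-01",
    action="tool_call:refund",
    inputs={"order_id": "A123", "amount_usd": 42.0},
    initiated_by="user-request-789",
)
print(record.to_log_line())
```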
Companies that succeed with AI agents aren't just automating tasks: they're choosing between rebuilding workflows around agents and adapting agents to existing human patterns. The key is knowing which approach drives adoption.
A University of Washington framework argues that AI agent autonomy should be a deliberate design choice, separate from capability, and proposes five user role levels from operator to observer.
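Read one way, the proposal turns autonomy into a configuration value rather than an emergent property. Here is a minimal sketch under that reading; the operator and observer endpoints come from the summary above, while the three intermediate role names are assumptions for illustration:

```python
from enum import IntEnum


class UserRole(IntEnum):
    """Five illustrative autonomy levels, from most to least human control.

    Only the operator/observer endpoints are taken from the framework's
    summary; the intermediate names are placeholders for this sketch.
    """
    OPERATOR = 1      # human performs the task; agent assists
    COLLABORATOR = 2  # human and agent share the work
    CONSULTANT = 3    # agent acts; human is consulted on key steps
    APPROVER = 4      # agent acts; human approves before commit
    OBSERVER = 5      # agent acts autonomously; human only monitors


def requires_human_signoff(role: UserRole) -> bool:
    """The deliberate design choice: gate irreversible actions by the
    chosen role, independent of what the agent is technically capable of."""
    return role <= UserRole.APPROVER
```

The design point is the separation of concerns: capability determines what an agent could do, while the configured role determines what it is permitted to do without a human.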