PSR Field Report: Privacy law meets AI agents: Why consent is just the beginning


In October I spent two days at IAPP Privacy. Security. Risk. 2025 in San Diego, watching 500+ practitioners solve problems that didn't exist two years ago. The conversations kept circling back to a tension I've written about before: we're building AI systems faster than we're building accountability structures around them. What struck me wasn't the regulatory uncertainty—that's nothing new. It was how the gaps between what developers build, what deployers control, and what regulators expect are creating real operational problems right now.

Over the next few weeks, I'll be publishing a series pulling out patterns I saw across sessions on AI agents, state regulatory innovation, cross-functional governance, and federal preemption. These aren't conference recaps. They're field notes on where the handoffs are breaking down and what that means for teams trying to ship responsibly.

The session on agentic AI and privacy exposed a fundamental mismatch: our privacy frameworks were designed for static data processing, not autonomous systems that operate persistently across multiple contexts.

Traditional privacy law assumes you can get informed consent at meaningful moments—when data gets collected, used for new purposes, or shared. Agents don't work that way. A demo from an earlier session showed an agent booking travel that asked permission every few seconds. "Can I access your email? Can I search flights?" Users reflexively clicked "yes" to everything just to complete the task. The legal requirement gets satisfied. The meaningful control is fiction.

The same breakdown happens with purpose limitation, data minimization, and transparency. Privacy law requires collecting data for specified purposes only. But agents are designed to adapt, learn, and take steps that weren't explicitly specified. They need memory to function and context from past interactions to avoid mistakes. Where does legitimate purpose end when the system figures out its own path? And when decisions emerge from complex interactions rather than predetermined rules, explanations that satisfy legal requirements may not provide meaningful transparency about what actually happened.

This connects to the upstream-downstream divide: model developers build agents with constraints, deployers configure them for business contexts, but neither can fully control how agents behave once they're operating autonomously. Who's responsible for privacy compliance when the agent makes unexpected decisions?

The answer can't be "add more oversight." Human-in-the-loop works for high-stakes decisions, not for agents handling routine tasks at scale. So what works? Five patterns from practitioners, each paired below with a rough sketch of what it could look like in code:

Scope limitation: Define what the agent definitively cannot do. Build technical controls that prevent access to certain data categories regardless of what the agent determines might be useful.
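
A minimal sketch of what a hard scope limit could look like, assuming a gateway that sits between the agent and any data source. The category names, DataRequest, and ScopeViolation are illustrative, not a standard API:

```python
from dataclasses import dataclass

# Data categories this agent may never touch, no matter what it decides
# mid-task. Hard-coded here for illustration; in practice this would come
# from a deployer-managed policy.
BLOCKED_CATEGORIES = {"health_records", "biometrics", "precise_location"}

@dataclass
class DataRequest:
    category: str       # e.g. "calendar", "work_email"
    justification: str  # the agent's stated reason for the access

class ScopeViolation(Exception):
    """Raised when the agent requests data that is permanently off-limits."""

def enforce_scope(request: DataRequest) -> DataRequest:
    # The check ignores the agent's justification entirely: blocked categories
    # stay blocked however useful the agent thinks the data might be.
    if request.category in BLOCKED_CATEGORIES:
        raise ScopeViolation(f"'{request.category}' is outside the agent's scope")
    return request

enforce_scope(DataRequest("calendar", "find free meeting slots"))  # allowed
# enforce_scope(DataRequest("biometrics", "personalize"))  # raises ScopeViolation
```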

Contextual consent: "This agent can access work email for scheduling" rather than asking permission for every action. More coarse-grained but more meaningful.
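
One way to model a contextual grant, as a sketch: the user consents once to a purpose and a set of data categories, and each action is checked against that grant instead of prompting again. ConsentGrant, is_permitted, and the 30-day expiry are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentGrant:
    purpose: str               # e.g. "scheduling"
    categories: frozenset      # e.g. {"work_email", "calendar"}
    expires: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=30)
    )

def is_permitted(grant: ConsentGrant, purpose: str, category: str) -> bool:
    """True only if the action falls inside what the user already agreed to."""
    return (
        purpose == grant.purpose
        and category in grant.categories
        and datetime.now(timezone.utc) < grant.expires
    )

# "This agent can access work email and calendar for scheduling."
grant = ConsentGrant("scheduling", frozenset({"work_email", "calendar"}))
assert is_permitted(grant, "scheduling", "work_email")   # no prompt needed
assert not is_permitted(grant, "scheduling", "payroll")  # outside the grant
```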

Continuous monitoring: Systems that detect when behavior drifts from expected patterns. Real-time visibility into what the agent is actually doing, not just what it was configured to do.
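
A rough sketch of one drift check under simple assumptions: compare the mix of actions the agent actually took against the mix it was configured for, and flag anything unexpected or disproportionate. The baseline shares and threshold are invented for illustration:

```python
from collections import Counter

# What the deployer expects the agent's activity to look like (made-up numbers).
EXPECTED_SHARE = {"read_calendar": 0.5, "send_email": 0.3, "search_flights": 0.2}
DRIFT_THRESHOLD = 0.15  # flag when an action's share moves more than 15 points

def detect_drift(observed_actions: list) -> list:
    """Return human-readable alerts for unexpected or disproportionate behavior."""
    counts = Counter(observed_actions)
    total = sum(counts.values()) or 1
    alerts = []
    for action, count in counts.items():
        expected = EXPECTED_SHARE.get(action)
        if expected is None:
            alerts.append(f"unexpected action: {action}")
        elif abs(count / total - expected) > DRIFT_THRESHOLD:
            alerts.append(f"{action} at {count / total:.0%} of activity vs. {expected:.0%} expected")
    return alerts

print(detect_drift(["read_calendar", "send_email", "export_contacts"]))
# ['read_calendar at 33% of activity vs. 50% expected', 'unexpected action: export_contacts']
```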

Graceful degradation: Tiered permissions that grant access to progressively more sensitive data as tasks require it. Mechanisms for agents to escalate to humans when they hit edge cases.
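
Here's roughly what tiered permissions with human escalation could look like; the tier names and the escalate() hook are hypothetical placeholders for whatever review process a deployer actually runs:

```python
SENSITIVITY_TIERS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

class TieredAgent:
    def __init__(self, max_tier: str = "internal"):
        self.max_tier = max_tier  # highest tier the agent may touch on its own

    def access(self, resource: str, tier: str) -> str:
        if SENSITIVITY_TIERS[tier] <= SENSITIVITY_TIERS[self.max_tier]:
            return f"accessed {resource}"
        # Edge case: the task needs more sensitive data than the agent holds.
        return self.escalate(resource, tier)

    def escalate(self, resource: str, tier: str) -> str:
        # In a real deployment this would block until a human approves or
        # denies; here it just records the handoff.
        return f"escalated: {resource} needs human sign-off at tier '{tier}'"

agent = TieredAgent()
print(agent.access("team wiki", "internal"))      # within the agent's own tier
print(agent.access("salary data", "restricted"))  # hands off to a human
```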

Explainability by design: Build audit trails into agent architecture. Log intermediate steps and data accessed along the way.
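
A minimal sketch of an append-only audit trail, assuming the agent framework lets you hook each intermediate step; the AuditTrail class and its field names are illustrative rather than any standard schema:

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only record of each step an agent takes and the data it touched."""

    def __init__(self):
        self._entries = []

    def log(self, step: str, data_accessed: list, reason: str) -> None:
        self._entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "data_accessed": data_accessed,
            "reason": reason,
        })

    def export(self) -> str:
        # Serialized trail that can be handed to auditors, regulators, or users.
        return json.dumps(self._entries, indent=2)

trail = AuditTrail()
trail.log("search_flights", ["calendar"], "find dates the user is free")
trail.log("book_flight", ["payment_method"], "complete the approved booking")
print(trail.export())
```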

These aren't perfect. They're compromises between the privacy frameworks we have and the technical capabilities we're deploying. But they're operational.

The broader question is whether privacy law itself needs to evolve. The individual control model assumes users have the context to make informed choices and the tools to enforce them. Agentic AI challenges both assumptions. An alternative framework is emerging: shift from individual control to systemic accountability. Focus less on user consent for each action and more on organizational responsibility for agent behavior. Require demonstrable oversight mechanisms, not just privacy policies.

That shift is already happening in AI safety frameworks—red teaming, adversarial testing, ongoing monitoring. Privacy regulation is moving in that direction too, particularly in California's approach to automated decision-making and the GDPR's accountability structure in the EU.

For legal and product teams, the question is whether to wait for legal clarity or start building for where this is heading. Teams that wait will retrofit privacy controls. Teams that build now will architect agents that can scale as regulatory expectations evolve.