Trust Isn't a Checkbox—It's Infrastructure: Lessons from IAPP PSR 2025

At IAPP PSR 2025, the pattern was clear: we're building AI systems faster than accountability structures.

Saima Fancy, Bret Cohen and Ken Priore

In October, I spent two days at IAPP Privacy. Security. Risk. 2025 in San Diego, watching 500+ practitioners try to solve problems that didn't exist two years ago. The conversations kept circling back to a tension I've written about before: we're building AI systems faster than we're building accountability structures around them.

What struck me wasn't the regulatory uncertainty—that's nothing new. It was how the gaps between what developers build, what deployers control, and what regulators expect are creating real operational problems right now. Over the next few weeks, I'll be publishing a series of posts pulling out the patterns I saw across sessions on AI agents, state regulatory innovation, cross-functional governance, and the federal preemption fight. These aren't conference recaps. They're field notes on where the handoffs are breaking down and what that means for teams trying to ship responsibly.

The convergence nobody's talking about

My panel on October 31st, alongside Bret Cohen from Hogan Lovells and Saima Fancy from Adobe, started with a straightforward observation: the EU AI Act, Colorado's AI law, and California's CCPA regulations on automated decision-making all land in the same place. They expect companies to risk-assess, test, and monitor high-risk AI systems. They expect you to be able to explain why your systems reach the results they do. And they expect you to show that you sought to identify and mitigate reasonably foreseeable risks.

The regulatory surface area looks different across jurisdictions. The timelines vary. But the operational demands converge on a single requirement: demonstrate that someone's actually responsible for what your AI systems do.

That convergence matters because it shifts the conversation from "what does compliance require" to "how do we build accountability that scales." The FTC's Rite Aid case made this concrete: failures to test accuracy before deployment, to prevent low-quality inputs, and to monitor performance after launch. These weren't exotic AI safety concepts. They were basic operational hygiene that didn't happen.

Trust as the thing you build, not the thing you claim

The framework we presented centered on three pillars that treat trust as infrastructure rather than aspiration.

First: establish a Trust Council with actual decision-making authority. Not another committee. A cross-functional body that can stop releases when trust profiles are insufficient. The goal is killing the diffusion of responsibility—no more "I thought Legal was handling it" or "I assumed Product checked that."

Second: extend what you already have rather than building parallel structures. Your privacy impact assessments become AI impact assessments. Your data governance frameworks expand to cover training data provenance. Your cybersecurity controls integrate AI-specific threats like prompt injection and model extraction. This isn't just efficiency—it's recognizing that teams are already stretched thin.

Third: measure trust impact, not just technical risk. A moderate technical risk in a customer-facing system can have severe trust implications. A high technical risk in an internal tool might carry limited trust exposure. The context determines what matters. So governance decisions need to account for whether the system actually builds or erodes trust with the people who need to rely on it.
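To make that third pillar a little more concrete, here's a minimal sketch of the two-axis idea: score technical risk and trust exposure separately, and let the combination drive the governance decision. The levels, thresholds, and routing below are illustrative assumptions on my part, not the scoring model we presented or anything running in production.

```python
from enum import Enum

class Level(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def governance_action(technical_risk: Level, trust_exposure: Level) -> str:
    # Escalation is driven by the combination, not by technical risk alone.
    if trust_exposure is Level.HIGH and technical_risk is not Level.LOW:
        return "escalate to Trust Council before release"
    if technical_risk is Level.HIGH:
        return "high-risk review plus post-launch monitoring"
    if trust_exposure is Level.HIGH:
        return "review explainability and user-facing communication"
    return "document and proceed with routine monitoring"

# The example from the post: the same moderate technical risk lands
# differently in a customer-facing system than in an internal tool.
print(governance_action(Level.MODERATE, Level.HIGH))  # customer-facing -> escalate
print(governance_action(Level.MODERATE, Level.LOW))   # internal tool -> document and proceed
```

The point isn't the code. It's that routing logic like this makes the context-dependence explicit instead of leaving it to whoever happens to be in the room.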

What I'm taking forward

The session that stayed with me wasn't mine. It was Jessica Hayes from CHG Healthcare demonstrating a governance framework that product teams actually wanted to use—not because it checked boxes, but because it helped them ship faster with confidence. The shift from episodic assessments to continuous monitoring. The emphasis on making data journeys visible. The recognition that governance requires political skills, not just legal expertise.

That connects to Julie Brill's observation from her Microsoft keynote: her role there demanded more political skill than her DC policy work. You're not issuing regulations. You're negotiating with P&L owners, engineering leads, and product managers who operate under different incentives and timelines. The most effective legal function becomes a bridge, translating risk into language each side can act on.

California's announcement of the DROP system—a one-stop deletion request platform launching January 1—showed what happens when regulators stop thinking about rights as individual burdens and start thinking about them as infrastructural capabilities. That's not just enforcement strategy. It's a preview of what privacy looks like when users have actual tools, not just additional rights on paper.

The agentic AI sessions surfaced the consent fatigue problem that everyone's experiencing but few are naming. An agent asking permission every few seconds. Users reflexively clicking yes just to complete tasks. Legal requirements satisfied, meaningful control eliminated. The consent frameworks we're using weren't designed for autonomous systems that operate persistently across multiple contexts.

And the upstream-downstream divide—model developers building capabilities, deployers configuring use cases, neither able to fully manage safety alone—keeps showing up as the structural problem underneath everything else. The handoff between them breaks down because current vendor agreements don't clearly allocate responsibility for AI system performance and impacts.

Where this is heading

Over the next few weeks, I'll be unpacking these patterns in detail. How California's approach to privacy infrastructure changes the compliance calculus. Why the developer-deployer divide creates governance gaps where everyone assumes someone else is handling oversight. What it actually takes to make governance frameworks that teams want to use rather than route around. How privacy principles designed for static data processing struggle when applied to agents that make autonomous decisions.

These posts won't be theoretical. They'll focus on where the operational breakdowns are happening and what teams are doing about it right now. Because the most useful insight at PSR wasn't in the keynotes—it was in the hallway conversations where practitioners admitted what's actually broken and shared what's actually working.

For now, the core takeaway: trust infrastructure isn't overhead. It's the foundation that lets everything else scale. The companies treating it as a bolt-on compliance exercise are building technical debt they'll spend years unwinding. The companies architecting it from the beginning are building the competitive moat that matters when everyone has access to the same models.