PSR Field Report: To adopt AI safely, the silos must fall

In October I spent two days at IAPP Privacy. Security. Risk. 2025 in San Diego, watching 500+ practitioners try to solve problems that didn't exist two years ago. The conversations kept circling back to a tension I've written about before: we're building AI systems faster than we're building accountability structures around them. What struck me wasn't the regulatory uncertainty—that's nothing new. It was how the gaps between what developers build, what deployers control, and what regulators expect are creating immediate operational problems. Over the next few weeks, I'll be publishing a series of posts pulling out the patterns I saw across sessions on AI agents, state regulatory innovation, cross-functional governance, and the federal preemption fight—field notes on where the handoffs are breaking down and what that means for teams trying to ship responsibly.

The conference title—Privacy. Security. Risk.—highlighted something organizations haven't figured out yet. These used to be separate functions with clean org chart boxes. Privacy handled data collection. Security handled protection. Risk handled business impact.

AI scrambled that arrangement. An agent handling sensitive data creates a privacy issue, a security issue, and a business risk at the same time. You can't hand it to one team and move on. But most organizations act like you can. Privacy teams push for strict controls. Security wants minimal access. Product wants to iterate quickly. Legal wants documented approvals. Each goal makes sense on its own. But they don't naturally align, and when nobody has to reconcile them before shipping, you get the current mess: everyone doing the right thing in their lane while collectively building systems nobody can govern.

The fix isn't "communicate better" or "have more meetings." Those fail when the underlying incentives stay misaligned. What actually works: shared accountability, where performance reviews depend on shipping safely rather than on shipping fast or merely staying compliant. Decisions in the hands of people who can actually evaluate them; legal shouldn't be approving model deployments it can't assess. Technical systems that encode governance requirements, like Reliance AI's tooling for detecting when a training run would violate privacy regulations. Infrastructure, not policy documents.
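
To make "infrastructure, not policy documents" concrete, here's a minimal sketch of what a policy-as-code gate in a training pipeline might look like. The dataset tags, the check_training_allowed helper, and the consent rule are hypothetical illustrations of the idea, not Reliance AI's actual product or API.

```python
# Hypothetical sketch: a governance rule encoded as a pre-training gate.
# Dataset metadata, tag names, and the policy rule are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    tags: set[str] = field(default_factory=set)  # e.g. {"contains_pii", "consent:support_quality"}

def check_training_allowed(dataset: Dataset, purpose: str) -> tuple[bool, str]:
    # Illustrative rule: training on personal data requires an explicit
    # consent tag that covers the model's stated purpose.
    if "contains_pii" in dataset.tags and f"consent:{purpose}" not in dataset.tags:
        return False, f"{dataset.name}: personal data lacks consent for purpose '{purpose}'"
    return True, "ok"

def start_training(dataset: Dataset, purpose: str) -> None:
    allowed, reason = check_training_allowed(dataset, purpose)
    if not allowed:
        # The pipeline refuses to run, rather than relying on a policy PDF someone may not read.
        raise PermissionError(f"Training blocked: {reason}")
    print(f"Training on {dataset.name} for purpose '{purpose}'...")

if __name__ == "__main__":
    ds = Dataset("support_tickets", {"contains_pii", "consent:support_quality"})
    start_training(ds, "support_quality")   # runs
    start_training(ds, "ad_targeting")      # blocked by the encoded rule
```

The point of the sketch is that the rule lives where the work happens: a product team can't "forget" the policy, because the pipeline enforces it before anything ships.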

But none of that happens without leadership mandate. People doing the work can't break down departmental lines—they operate within whatever structure leadership creates. If the CEO or board doesn't require teams to work together on AI governance, it happens in pockets driven by personal relationships and disappears when those people leave.

The organizations that figure this out won't be the ones with the strongest privacy teams or security teams. They'll be the ones that make those teams work together, with shared goals, shared tools, and shared accountability for outcomes that cut across traditional domains. AI moves too fast for anything else.

The silos have to fall. AI governance requires it.