PSR Field Report: Privacy governance as product enabler, not department of no
In October I spent two days at IAPP Privacy. Security. Risk. 2025 in San Diego, watching 500+ practitioners try to solve problems that didn't exist two years ago. The conversations kept circling back to a tension I've written about before: we're building AI systems faster than we're building accountability structures around them. What struck me wasn't the regulatory uncertainty—that's nothing new. It was how the gaps between what developers build, what deployers control, and what regulators expect are creating real operational problems right now. Over the next few weeks, I'll be publishing a series of posts pulling out the patterns I saw across sessions on AI agents, state regulatory innovation, cross-functional governance, and the federal preemption fight. These aren't conference recaps. They're field notes on where the handoffs are breaking down and what that means for teams trying to ship responsibly.
The most practical session at PSR 2025 wasn't about what privacy teams should do. It was about how to actually get it done when everyone's racing to ship AI products.

Jessica Hayes from CHG Healthcare presented a governance framework that teams actually want to use. Not because it checks compliance boxes, but because it helps the product move faster with confidence.
Now, CHG isn't subject to HIPAA, but they handle data for healthcare providers and patients. They face HR laws and staffing regulations. And like every company right now, their teams are integrating AI into workflows at speed. The question wasn't whether to govern AI—it was how to govern it without becoming the bottleneck.
The answer required a mindset shift that applies far beyond healthcare. The first shift: governance requires political skills, not just legal expertise. Jessica's observation echoed a point from Julie Brill's keynote—Brill had more influence over making technology trustworthy at Microsoft than she did at the FTC. That's the reality of effective in-house work. You're not issuing regulations. You're negotiating with P&L owners who have revenue targets, engineering leads on tight deadlines, and product managers trying to ship features.

Understanding their pressures isn't optional. It's the only way governance becomes operational rather than ornamental. Jessica described it as learning to ask: "If you were the doctor, how would you want your information used?" That reframing—from compliance obligation to user trust—changes the conversation entirely.
The second shift: from episodic assessments to continuous adaptation. Traditional compliance operates in cycles—annual audits, periodic reviews, point-in-time risk assessments. AI systems don't work that way. They evolve continuously. Data sources change. Models get updated. Use cases expand. Governance frameworks that rely on static snapshots fail.
The alternative: systems that monitor in real-time, adapt controls dynamically, and surface risks as they emerge rather than after incidents occur. Sheila Wright, partnering on the presentation, connected this to the shadow AI challenge. It mirrors shadow IT from a decade ago—people use tools that help them work efficiently, whether sanctioned or not. The response isn't lockdown. It's making approved tools as useful as the alternatives while building in guardrails by design.
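What "monitor in real time and adapt controls" can mean in practice: below is a toy sketch of an output monitor that flags drift as it happens rather than at the next annual audit. The class name, baseline rate, and tolerance are all illustrative assumptions, not anything from the presentation.

```python
from collections import deque

class OutputMonitor:
    """Flags when a model's recent rate of problematic outputs drifts
    from its baseline. A stand-in for continuous governance: the check
    runs on every output, not on an audit cycle."""

    def __init__(self, baseline_rate: float, tolerance: float, window: int = 100):
        self.baseline = baseline_rate      # expected rate of flagged outputs
        self.tolerance = tolerance         # acceptable deviation from baseline
        self.recent = deque(maxlen=window) # rolling window of recent outputs

    def record(self, flagged: bool) -> bool:
        """Record one output; return True if drift now exceeds tolerance."""
        self.recent.append(flagged)
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

# Simulate a model whose flag rate (~26%) has drifted past a 10% ± 5% band
monitor = OutputMonitor(baseline_rate=0.10, tolerance=0.05, window=50)
alerts = [monitor.record(i % 4 == 0) for i in range(50)]
print(alerts[-1])  # True: the drift surfaces as it emerges, not after an incident
```

The point isn't the arithmetic; it's that the control fires continuously, so an expanded use case or changed data source shows up in days, not at the next point-in-time review.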
The third shift: making data journeys visible. You can't govern what you can't see. The "data journeys" concept cuts through abstraction: where does data originate, where does it flow, how is it transformed, what decisions does it inform, who has access at each stage. Without that visibility, governance becomes guesswork. With it, you can build controls that match actual system behavior rather than documented intent.
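The "data journeys" idea can be made concrete as a data structure. A minimal sketch, assuming a hypothetical `DataJourney` record; every name and example flow here is invented for illustration, not taken from CHG's actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class Hop:
    """One stage in a data journey: a system that holds or transforms the data."""
    system: str            # where the data lives at this stage
    transformation: str    # what happens to it here, e.g. "normalized"
    accessors: list[str]   # roles with access at this stage

@dataclass
class DataJourney:
    """End-to-end map of one data flow, from origin to the decision it informs."""
    origin: str
    decision_informed: str
    hops: list[Hop] = field(default_factory=list)

    def roles_with_access(self) -> set[str]:
        """Every role that can touch this data anywhere along the journey."""
        return {role for hop in self.hops for role in hop.accessors}

# Illustrative example: a staffing-recommendation flow
journey = DataJourney(
    origin="provider licensing database",
    decision_informed="staffing placement recommendation",
    hops=[
        Hop("intake service", "normalized", ["data engineering"]),
        Hop("ML feature store", "aggregated", ["data science", "ml platform"]),
    ],
)
print(sorted(journey.roles_with_access()))
```

Once journeys are records rather than tribal knowledge, questions like "who can see this data before it's aggregated?" become queries instead of interviews.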
The framework presented was a 10-stage model built through collaboration with companies actively deploying AI. In broad strokes: define purpose and ownership; build visibility into data flows; align system behavior with requirements; secure access and prevent leaks; monitor outputs and adapt controls. Each stage reinforces the others, creating governance that scales with technology rather than lagging behind it.

But organizations that embed governance early don't just reduce risk—they enable faster, more confident innovation. When product teams know the guardrails, they can move quickly within them. When legal understands the technical constraints, they can offer solutions instead of just flagging problems. When everyone speaks the same language about data flows and decision points, cross-functional collaboration actually works.
For legal and product teams, the operational question is whether governance lives in a policy document or gets embedded in the systems themselves. Documents don't prevent data leaks. Architecture does. Policies don't catch biased outputs. Monitoring systems do. Guidelines don't ensure purpose limitation. Technical controls do.
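The contrast between policy and control is literal. Here is a hedged sketch of purpose limitation enforced as code rather than stated as a guideline; the purpose registry, field names, and function are all hypothetical.

```python
# Purpose limitation as a technical control: a field is released only for
# purposes registered at collection time. (All names are illustrative.)
ALLOWED_PURPOSES = {
    "provider_license_number": {"credentialing", "compliance_reporting"},
    "shift_preferences": {"staffing_placement"},
}

class PurposeViolation(Exception):
    """Raised when data is requested for an unregistered purpose."""

def fetch_field(record: dict, field_name: str, purpose: str):
    """Return a field only if the stated purpose is registered for it."""
    if purpose not in ALLOWED_PURPOSES.get(field_name, set()):
        raise PurposeViolation(f"{field_name!r} may not be used for {purpose!r}")
    return record[field_name]

record = {"provider_license_number": "A-12345", "shift_preferences": "nights"}
print(fetch_field(record, "provider_license_number", "credentialing"))  # A-12345
try:
    fetch_field(record, "shift_preferences", "marketing")  # not registered
except PurposeViolation as err:
    print(err)
```

A policy document can say the same thing; the difference is that the violation here is impossible to commit quietly, which is what "embedded in the systems themselves" means in practice.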