Beyond the Broccoli: How AI Governance Fills Your Trust Reservoir
Governance without narrative is just bureaucracy
For most organizations, AI governance feels like a plate of broccoli. You know you're supposed to do it, but it's often seen as a burdensome, bureaucratic task to be choked down in the name of compliance. It's the process that slows things down, the checklist that engineers resent, and the obstacle people try to work around.
But what if we've been telling the wrong story about governance? Andrew Sorkin, speaking at the Association of Corporate Counsel conference, offered a metaphor that reframes the entire conversation:
"Trust is a reservoir organizations fill during normal times."
Trust isn't something you build during a crisis—it's something you accumulate, drop by drop, through every consistent and responsible action you take. Every decision you make about your AI systems, from the data you use to the way you monitor performance, is either filling or draining that reservoir.
Governance feels like broccoli because we've been telling the wrong story about it. Effective AI governance isn't about bureaucratic box-ticking. It's about telling a coherent and compelling story of trustworthiness to your engineers, your customers, and your regulators. Three mindset shifts can transform AI governance from a brake into an accelerator.
End the blame game with a "Trust Council"
When an AI system fails, the finger-pointing begins. "I thought Legal had it." "Product signed off on this." Everyone assumes someone else was responsible because, in reality, no one was. This is where accountability disappears into the "yada yada" of an organization's process—the part of the story where you can no longer explain how a decision was made, leaving dangerous gaps in oversight.
Create a Trust Council: a dedicated group with the defined authority to make decisions about AI risk. The council's power comes from its clarity. It makes clear who is responsible for preventing harm, who determines whether a product is ready to ship, and who is accountable for managing AI risk. The council even has the "power to pause" a release if the risks exceed the organization's tolerance. This isn't obstruction—it's protection: a deliberate act to safeguard the trust you've spent so long building. When teams know who has the authority to make these calls, they engage differently, and governance becomes principled rather than arbitrary.
Stop building from scratch—multiply what you already have
The common impulse is to treat AI as something so new and alien that it requires building expensive governance teams and processes from scratch. This makes the task feel overwhelming and creates resistance. But this "start from zero" approach overlooks something important: your organization already knows how to govern complex, risky systems.
The smarter approach is Resource Multiplication—using the governance capabilities you already possess. As Rajeev Ronanki noted at the TEDAI conference:
"If we keep calling it alignment governance, it's gonna sound like a plate of broccoli."
Instead of serving up something new and unpalatable, frame AI governance as an extension of the excellence you've already achieved.
- The privacy assessments you conduct today are the foundation for AI Impact Assessments. Expand them to include questions about model bias, fairness, and explainability. Same process, expanded scope (a sketch follows below).
- Your current data governance can be extended to track the sources of AI training data, document its provenance, and manage inherent bias, adapting the infrastructure you have instead of building it all again.
- Your existing security controls and threat modeling processes can be adapted to defend against AI-specific threats like prompt injection, data poisoning, and model extraction.
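As an illustration of "same process, expanded scope," here is a minimal sketch of how an existing privacy assessment checklist could be extended with AI-specific questions. The class, field names, and questions are hypothetical, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    """A generic assessment: a named checklist of review questions."""
    name: str
    questions: list[str] = field(default_factory=list)

# The privacy assessment you already run (questions are illustrative).
privacy_assessment = Assessment(
    name="Privacy Impact Assessment",
    questions=[
        "What personal data does the system collect?",
        "Is there a lawful basis for processing it?",
        "How long is the data retained, and who can access it?",
    ],
)

def extend_to_ai_impact_assessment(base: Assessment) -> Assessment:
    """Same process, expanded scope: reuse the privacy checklist and append
    AI-specific questions about bias, fairness, and explainability."""
    ai_questions = [
        "What data was the model trained on, and is its provenance documented?",
        "Has the model been evaluated for bias across the groups it affects?",
        "Can the system's decisions be explained to the people they affect?",
    ]
    return Assessment(name="AI Impact Assessment",
                      questions=base.questions + ai_questions)

ai_assessment = extend_to_ai_impact_assessment(privacy_assessment)
for question in ai_assessment.questions:
    print("-", question)
```

The point is structural: the AI Impact Assessment reuses the privacy checklist and its review workflow; only the questions grow.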
This reframes governance from "starting from zero" to "expanding from strength." It makes your privacy and security teams feel valued for their expertise, not obsolete or defensive. When they hear their skills are the foundation for the future, engagement shifts from resistance to curiosity and capability.
Measure what matters: from abstract risk to human trust
Traditional risk assessment is often a lifeless exercise in calculating probability and severity, resulting in an abstract score that fails to communicate the real-world stakes. A "moderate risk rating" doesn't help an engineer or a product manager understand the potential impact on people. A Trust Assessment reframes the entire conversation by asking human-centric questions:
- If this system makes a mistake, who gets hurt?
- If this model has bias, whose opportunities disappear?
- If this AI can't explain itself, who's left in the dark?
These questions ground governance in reality. A Trust Assessment evaluates your AI systems along four dimensions of trust, mapping abstract risks to their concrete impact on people (a rough version of this mapping is sketched after the list):
- Safety Trust: Can we protect people from harm? (Connects to privacy and security risks)
- Equity Trust: Does this treat people fairly? (Connects to fairness and bias risks)
- Reliability Trust: Can people count on this? (Connects to performance and accuracy risks)
- Understanding Trust: Can people make sense of this? (Connects to transparency and explainability gaps)
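For teams that want to operationalize this, here is a minimal sketch of what that mapping could look like in code, assuming the four dimensions above. The risk categories, example findings, and function names are illustrative, not a standard schema.

```python
# Hypothetical mapping from the four trust dimensions to the risk
# categories they cover; dimension names follow the list above.
TRUST_DIMENSIONS = {
    "safety":        {"question": "Can we protect people from harm?",
                      "risks": ["privacy", "security"]},
    "equity":        {"question": "Does this treat people fairly?",
                      "risks": ["fairness", "bias"]},
    "reliability":   {"question": "Can people count on this?",
                      "risks": ["performance", "accuracy"]},
    "understanding": {"question": "Can people make sense of this?",
                      "risks": ["transparency", "explainability"]},
}

def summarize_trust(findings: dict[str, list[str]]) -> dict[str, list[str]]:
    """Group raw risk findings (risk category -> issues) under the trust
    dimension they affect, so reviewers see impact on people, not scores."""
    summary = {dimension: [] for dimension in TRUST_DIMENSIONS}
    for dimension, spec in TRUST_DIMENSIONS.items():
        for risk in spec["risks"]:
            summary[dimension].extend(findings.get(risk, []))
    return summary

# Example: findings from a hypothetical model review.
findings = {
    "bias": ["Approval rates differ sharply across zip codes"],
    "explainability": ["No reason codes are returned to applicants"],
}
print(summarize_trust(findings))
```

Grouping findings this way keeps the review anchored in who is affected rather than in an abstract score.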
This makes the stakes of governance clear to everyone involved. It becomes less about checking a compliance box and more about building systems that earn and deserve the trust of the people who depend on them.
Your governance is the story you tell
To transform AI governance, you need clear ownership through a Trust Council, resource multiplication that builds on your existing strengths, and the right lens for evaluation: a focus on human trust. These are the core components of a governance program that accelerates innovation by building confidence.
The most successful organizations will be those where governance becomes part of how they work, not something layered on top: automated risk assessments integrated directly into development pipelines, real-time monitoring that detects model drift before it becomes a crisis, and predictive governance dashboards that transform oversight from a reactive burden into useful intelligence.
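To make the monitoring piece concrete, here is a minimal sketch of the kind of drift check such a pipeline might run, assuming a numeric input feature and a two-sample Kolmogorov-Smirnov test from SciPy. The threshold, the synthetic data, and the escalation step are illustrative assumptions, not a prescribed design.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(reference: np.ndarray,
                        live: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Flag drift when live traffic for a feature no longer looks like the
    training-time reference distribution (two-sample Kolmogorov-Smirnov test)."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Example: compare training-time values of one feature with the last hour of traffic.
rng = np.random.default_rng(seed=0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_values = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted distribution

if feature_has_drifted(training_values, live_values):
    print("Drift detected: open a review with the Trust Council before it becomes a crisis.")
```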
The systems and processes you build are only as effective as the narrative that supports them.
"Governance without narrative is just bureaucracy."
These practices are how you actively fill your organization's "trust reservoir" during normal times. By establishing clear accountability, building on your existing strengths, and measuring what matters, you ensure that the reservoir is full long before a crisis ever hits.
What story is your AI governance telling about you?