Five layers that separate real governance from compliance theater

4 min read

There are hundreds of pages of AI regulation now. The EU AI Act. GDPR automated decision-making rules. CCPA. HIPAA. NIST AI RMF. ISO 42001. OWASP ML Top 10. Most organizations have read at least some of it. Very few have operationalized any of it in a way that actually holds together.

Noah Kenney's Governing Intelligence (Digital 520, 2026) is a practitioner's textbook — 440+ pages covering AI governance from first principles to sector deployment. The most useful contribution isn't the regulatory analysis (thorough as it is). It's the five-layer stack Kenney uses as an organizing framework. It gives teams something they desperately need: a way to think about governance as a system rather than a checklist.

Here's the stack:

Layer 1: Data Governance. The foundation. Data quality, lineage, classification, and access controls that feed AI systems. Most teams skip this or treat it as an IT problem. Kenney is blunt — governance failures propagate upward. If the data layer is weak, everything above it is compromised.

Layer 2: Model Governance. How models are built, tested, validated, and documented. This includes bias auditing, explainability requirements, and model cards. EU AI Act Article 13 (transparency) and Article 9 (risk management) both live here. If you're building a high-risk AI system under the Act's classification, your conformity assessment needs evidence from this layer.

Layer 3: System Integration Governance. What happens when the model connects to enterprise systems, APIs, and third-party data sources. This is where supply chain risk appears. A model that passes internal validation can still fail when integrated with external data feeds that weren't part of the testing environment.

Layer 4: Control and Monitoring Governance. Runtime oversight — who watches the system after deployment, what triggers review, and what the escalation path looks like when something goes wrong. This layer is where most governance programs fall apart. Organizations build rigorous pre-deployment review processes and then deploy systems into production with no ongoing oversight mechanism.

Layer 5: Audit and Evidence Governance. Documentation, recordkeeping, and the ability to reconstruct decisions after the fact. Not just for regulators — for internal accountability. When a model makes a consequential decision and you need to understand what happened, this layer is what makes that possible.

The insight that makes this framework useful isn't the layers themselves. It's the cross-layer dependencies. Kenney shows, through a worked credit decision system example, how a failure at Layer 1 (data quality) cascades through the stack: it poisons the model (Layer 2), creates integration errors (Layer 3), exceeds what monitoring can catch (Layer 4), and produces audit trails that look clean while the underlying problem persists.
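The cascade logic can be made concrete. Here is a minimal sketch — mine, not Kenney's; the layer names map to the stack above but the `GovernanceStack` class and `cascade_risk` helper are illustrative — of how a team might record findings per layer and see which higher layers a lower-level failure propagates into:

```python
from dataclasses import dataclass, field

# Bottom-up ordering of the five-layer stack described above.
LAYERS = [
    "data",          # Layer 1: quality, lineage, access controls
    "model",         # Layer 2: validation, bias audits, model cards
    "integration",   # Layer 3: APIs, third-party data feeds
    "monitoring",    # Layer 4: runtime oversight, escalation
    "audit",         # Layer 5: evidence, decision reconstruction
]

@dataclass
class GovernanceStack:
    # Unresolved findings, keyed by layer.
    findings: dict = field(default_factory=lambda: {l: [] for l in LAYERS})

    def report(self, layer: str, issue: str) -> None:
        self.findings[layer].append(issue)

    def cascade_risk(self) -> list:
        """Walk the stack bottom-up: once any layer has an open
        finding, every layer above it is flagged as at risk."""
        at_risk, tainted = [], False
        for layer in LAYERS:
            if tainted:
                at_risk.append(layer)
            if self.findings[layer]:
                tainted = True
        return at_risk

stack = GovernanceStack()
stack.report("data", "stale income field in training feed")
print(stack.cascade_risk())  # ['model', 'integration', 'monitoring', 'audit']
```

The point of a toy like this is the review discipline it encodes: a Layer 1 finding is never "an IT ticket" — it is an open risk against every layer above it until it is resolved.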

This matters for product and legal teams for a specific reason: governance programs are typically built vertically. Legal reviews compliance. Security reviews infrastructure. Product reviews functionality. Nobody is reviewing how the layers interact — which is exactly where the regulatory risk concentrates.

The EU AI Act requires deployers of high-risk AI systems to implement risk management systems across the lifecycle. NIST AI RMF asks organizations to map, measure, manage, and govern AI risks. Both frameworks are describing something like Kenney's stack, but neither gives teams the cross-layer perspective they need to actually implement it.

What Kenney adds to the policy conversation: a way to run failure scenarios across the stack before you deploy, not after. The textbook walks through healthcare AI (FDA, HIPAA), financial services AI (GLBA, fair lending, model risk management), and government AI contexts — each using the stack as an analytical lens. The failure patterns that emerge explain where most governance programs break, regardless of industry.

A few things this book does well that typical AI governance writing doesn't:

First, it takes cybersecurity seriously as a governance problem, not a parallel track. Chapters 12-14 cover ML-specific attack vectors — adversarial examples, data poisoning, prompt injection, model extraction — in technical enough detail that product teams can have informed conversations with security teams about what risk management actually requires.

Second, it covers multi-jurisdictional complexity honestly. Chapter 6 maps regulatory divergence across the EU, UK, US, China, and emerging markets. The honest conclusion is that global harmonization is not happening on any near-term timeline. If your AI system touches multiple jurisdictions, you are building to the highest applicable standard or you are accepting meaningful compliance risk.

Third, the compliance program architecture in Chapter 16 is practical. Policy development, risk classification, training, incident response, regulatory monitoring — structured in a way that a team without dedicated AI legal resources could actually use.

The real gap in the current version: it was written before OWASP's formal agentic AI risk taxonomy was widely circulated, and agentic systems (multi-agent workflows, autonomous decision chains) only appear at the edges of the framework. For teams deploying agents rather than static models, the stack needs a sixth layer — something like agent execution governance — that covers how autonomous action is scoped, supervised, and traceable. That's the extension I'd add.
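To show what I mean by that sixth layer, here is a minimal sketch — entirely mine, not from the book or from OWASP; the class name and fields are hypothetical — of the two primitives it would need: an explicit action scope, and a trace of every attempted action, permitted or not:

```python
from datetime import datetime, timezone

class AgentExecutionLog:
    """Hypothetical Layer 6 primitive: each autonomous action is
    checked against a declared scope and recorded for audit."""

    def __init__(self, allowed_actions):
        self.allowed = set(allowed_actions)  # scoping: what the agent may do
        self.trace = []                      # traceability: what it tried to do

    def attempt(self, action, detail):
        permitted = action in self.allowed
        self.trace.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
            "permitted": permitted,
        })
        return permitted

log = AgentExecutionLog({"query_crm", "draft_email"})
log.attempt("query_crm", "lookup account 1142")  # in scope, proceeds
log.attempt("issue_refund", "refund $250")       # out of scope, denied but logged
```

Note that the denied attempt is logged too — for agents, the evidence layer has to capture what the system tried to do, not only what it was allowed to do.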

For product counsel, this is a reference text, not a strategy document. You won't find competitive positioning guidance or advice on how to structure AI governance as a business advantage. What you will find is the most thorough practitioner's map of the current regulatory-technical-compliance terrain I've seen in a single document.

The credit decision walkthrough in Chapter 1.7 is worth reading on its own. It demonstrates exactly how an abstract governance framework becomes a concrete set of requirements when you apply it to a real system with real data flows, real regulatory obligations, and real accountability questions.

Read with a highlighter. There are a lot of pages. Most of them earn their place.

Source: https://digital520.com/Governing_Intelligence_Publication_Version_NMK.pdf