Unified governance framework solves AI compliance fragmentation through engineering discipline

Credo AI's Unified Control Framework maps 42 controls to multiple risks and regulations simultaneously, reducing governance fragmentation through bidirectional mappings and concrete implementation guidance.


Eisenberg, Ian W., Lucía Gamboa, and Eli Sherman. "The Unified Control Framework: Establishing a Common Foundation for Enterprise AI Governance, Risk Management and Regulatory Compliance." Credo AI, March 11, 2025.

I think this Credo AI research represents the first serious attempt to solve AI governance fragmentation through engineering discipline, and it could fundamentally change how product teams approach compliance architecture decisions.

The numbers tell the story of why we need this. Organizations trying to comply with emerging AI regulations face a patchwork of requirements across jurisdictions—the EU AI Act, Colorado's SB 24-205, South Korea's AI Basic Act—that address similar underlying concerns but use different terminology and specific mandates. Meanwhile, risk management frameworks focus on isolated domains without comprehensive enterprise coverage, creating both gaps and redundancies when companies try to implement multiple approaches simultaneously. The Credo AI team, led by Ian Eisenberg, Lucía Gamboa, and Eli Sherman, tackles this complexity head-on with their Unified Control Framework.

The UCF's core innovation lies in creating bidirectional mappings between three components: a risk taxonomy covering 15 risk types with roughly 50 specific scenarios, structured policy requirements derived from regulations, and a unified library of 42 controls. Each control can address multiple risk scenarios while satisfying multiple regulatory requirements simultaneously. For example, "Establish AI system documentation framework" mitigates risks like "Opaque system architecture" and "Over or under-reliance and unsafe use" while satisfying documentation requirements across the EU AI Act, Colorado AI Act, and ISO/IEC 42001 standards.
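
To make the many-to-many structure concrete, here's a minimal sketch of how such a mapping could be represented in code. The schema and requirement identifiers are my own illustrative assumptions, not the UCF's actual data model; only CONTROL-009, its name, and the listed risk scenarios come from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    """One entry in the unified control library (illustrative schema)."""
    control_id: str
    name: str
    risk_scenarios: frozenset[str]       # risk scenarios this control mitigates
    policy_requirements: frozenset[str]  # regulatory requirements it satisfies

# Modeled on the documentation example from the paper; requirement IDs are invented.
documentation_control = Control(
    control_id="CONTROL-009",
    name="Establish AI system documentation framework",
    risk_scenarios=frozenset({
        "Opaque system architecture",
        "Over or under-reliance and unsafe use",
    }),
    policy_requirements=frozenset({
        "EU-AI-ACT:documentation",
        "CO-AI-ACT:documentation",
        "ISO-42001:documentation",
    }),
)

def controls_satisfying(controls: list[Control], requirement: str) -> list[Control]:
    """Requirement-to-controls lookup: one direction of the bidirectional mapping."""
    return [c for c in controls if requirement in c.policy_requirements]

def controls_mitigating(controls: list[Control], scenario: str) -> list[Control]:
    """Risk-scenario-to-controls lookup: the other direction."""
    return [c for c in controls if scenario in c.risk_scenarios]
```

With both lookups available, one control library answers both "what mitigates this risk?" and "what satisfies this regulation?" without duplicating entries.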

This many-to-many mapping approach directly addresses what the authors identify as the core problem: organizations ending up with their own internal governance patchwork that combines elements from multiple frameworks with organization-specific requirements. While this satisfies immediate goals, it lacks long-term vision and requires substantial effort when approaches need updating in response to changing risks or technological advances. The UCF provides the conceptual infrastructure for avoiding this fragmentation trap.

The validation work demonstrates the framework's practical utility. When mapped against Colorado's AI Act, the UCF addressed 13 of 14 policy requirements using existing controls, with only one gap around general incident reporting that led to adding a 42nd control (CONTROL-042). The interactive visualization they've created shows how individual controls often serve multiple governance purposes—CONTROL-022 "Implement adversarial testing and red team program" maps to eight separate risk scenarios, while foundational controls like "Establish user rights and recourse framework" address compliance requirements across multiple jurisdictions.
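
That validation step is essentially a coverage diff, and the mapping structure makes it mechanical: collect every requirement the control library touches and subtract it from a regulation's requirement list. A toy sketch with invented requirement IDs; only the 13-of-14 result and CONTROL-042's purpose come from the paper.

```python
def coverage_gaps(requirements_by_control: dict[str, set[str]],
                  regulation: set[str]) -> set[str]:
    """Return the regulation's requirements that no existing control satisfies."""
    covered: set[str] = set()
    for reqs in requirements_by_control.values():
        covered |= reqs
    return regulation - covered

# Toy data mirroring the Colorado result: 14 requirements, one left uncovered,
# the kind of gap that motivated adding CONTROL-042 for incident reporting.
colorado_reqs = {f"CO-REQ-{i:02d}" for i in range(1, 15)}
mapped = {
    "CONTROL-009": {"CO-REQ-01"},
    "CONTROL-022": {f"CO-REQ-{i:02d}" for i in range(2, 14)},
}
print(coverage_gaps(mapped, colorado_reqs))  # {'CO-REQ-14'}
```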

For product teams, the implementation guidance represents a significant step toward operationalizable governance. Each control includes detailed specifications covering integration points with existing tools like MLflow and GitHub, common pain points organizations face, required actions with specific stakeholder assignments, and evidence requirements for compliance verification. The simplified example they provide for "Establish AI system access controls" shows how abstract requirements translate into concrete technical architecture decisions about identity management solutions, logging tools, and RBAC implementation.
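
Here's a hypothetical rendering of such a control spec as plain data. The field names, stakeholder assignments, and tool choices are my assumptions modeled on the paper's description of integration points, actions, pain points, and evidence requirements, not the UCF's actual schema.

```python
# Hypothetical spec for one control; all values below are illustrative.
access_control_spec = {
    "control": "Establish AI system access controls",
    "required_actions": [
        {"action": "Define roles and permissions for model and data access",
         "owner": "ML platform team"},       # stakeholder assignments assumed
        {"action": "Enforce RBAC in the model registry and serving layer",
         "owner": "Security engineering"},
    ],
    "integration_points": [
        "identity management solution",      # e.g. the IdP the org already runs
        "logging and audit tooling",
        "MLflow model registry / GitHub repository permissions",
    ],
    "common_pain_points": [
        "role sprawl across teams",
        "service accounts bypassing per-user RBAC",
    ],
    "evidence": [
        "access-policy document",
        "RBAC configuration export",
        "sampled access-log audit",
    ],
}
```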

The business case becomes compelling when you consider the efficiency gains from unified governance. Instead of maintaining separate compliance processes for each jurisdiction, companies can deploy controls that automatically generate jurisdiction-specific reports from common data sources. Rather than having different teams work on overlapping risk mitigation and compliance activities, the framework identifies natural synergies that reduce duplication while maintaining comprehensive coverage. The structured approach also provides a foundation for automation through governance platforms that can integrate with MLOps workflows.
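
As a sketch of the "one evidence store, many jurisdiction reports" idea, each report can be a filtered view over shared compliance data. The requirement-to-evidence schema here is assumed, not drawn from the paper.

```python
def jurisdiction_report(evidence_store: dict[str, list[str]],
                        jurisdiction_reqs: set[str]) -> dict[str, list[str]]:
    """Project a shared evidence store onto one jurisdiction's requirements,
    so each regulator's report is just a filtered view of the same data."""
    return {req: evidence_store.get(req, []) for req in sorted(jurisdiction_reqs)}

evidence = {
    "EU-AI-ACT:documentation": ["model card v3", "architecture overview"],
    "CO-AI-ACT:documentation": ["model card v3"],  # same artifact, reused
}
print(jurisdiction_report(evidence, {"CO-AI-ACT:documentation"}))
```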

The risk taxonomy reveals how comprehensive modern AI governance needs to be. Beyond technical concerns like "Performance & Robustness" and "Security," the framework addresses organizational risks like "Integration challenges with existing systems" and emerging concerns like "AI pursuing its own goals in conflict with human goals or values." The taxonomy spans 15 risk types with approximately 50 specific scenarios, each including detailed descriptions of potential consequences, affected stakeholders, contributing factors, and concrete examples. This granularity enables organizations to identify relevant risk pathways while maintaining clear connections to higher-level categories that align with organizational functions.
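
A minimal sketch of what one taxonomy entry might look like as data. The field list follows the paper's description (consequences, stakeholders, contributing factors, examples); the concrete values and the risk-type placement are my own illustrations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskScenario:
    risk_type: str                       # one of the 15 top-level risk types
    scenario: str                        # one of the ~50 specific scenarios
    consequences: tuple[str, ...]
    stakeholders: tuple[str, ...]
    contributing_factors: tuple[str, ...]
    example: str

opaque_architecture = RiskScenario(
    risk_type="Transparency",            # illustrative placement, not the UCF's
    scenario="Opaque system architecture",
    consequences=("audit failures", "unaccountable decisions"),
    stakeholders=("regulators", "affected end users"),
    contributing_factors=("missing design docs", "undocumented model updates"),
    example="A credit model is challenged and no one can reconstruct its data lineage.",
)
```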

The methodology combining human expertise with AI assistance for risk analysis provides a scalable approach for keeping the framework current. The authors used Claude to analyze semantic relationships between risk descriptions, generate clusters of conceptually related risks, and evaluate descriptions against objective criteria for clarity and enterprise relevance. This computer-assisted approach enabled systematic analysis of large risk corpora while maintaining human oversight of taxonomic structure, suggesting a model for how governance frameworks can evolve alongside technological capabilities.
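
The paper's pipeline used Claude as the semantic judge, which I can't reproduce here; as a rough classical stand-in for the clustering step alone, this scikit-learn sketch groups invented risk descriptions by lexical similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Invented risk descriptions standing in for entries from a risk corpus.
risk_descriptions = [
    "Model behavior cannot be explained to affected users",
    "System internals are opaque to external auditors",
    "Training data includes unvetted third-party content",
    "Vendor-supplied components change without notice",
]

# Vectorize and cluster; an LLM-based pipeline would judge semantics directly.
vectors = TfidfVectorizer().fit_transform(risk_descriptions).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)
for label, description in sorted(zip(labels, risk_descriptions)):
    print(label, description)
```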

The framework's structured design naturally supports programmatic implementation through governance platforms. While manual implementation remains possible, software tooling can significantly accelerate adoption by automating control configuration based on governance context, integrating with existing development workflows, and streamlining evidence collection for compliance. This automation potential addresses key adoption barriers by reducing required expertise, minimizing manual overhead, and ensuring consistent application across organizations.
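
As a toy illustration of "automating control configuration based on governance context," selection can start as set intersection over the mappings. The control IDs come from the paper; the scenario and requirement strings attached to them here are invented.

```python
# (control_id, risk scenarios mitigated, policy requirements satisfied)
CONTROL_LIBRARY = [
    ("CONTROL-009", {"Opaque system architecture"}, {"EU-AI-ACT:documentation"}),
    ("CONTROL-022", {"Prompt injection", "Evasion of safety filters"}, set()),
]

def select_controls(context_risks: set[str], context_reqs: set[str]) -> list[str]:
    """Pull in any control touching a relevant risk or requirement; a real
    platform would then configure each selected control for the use case."""
    return [cid for cid, risks, reqs in CONTROL_LIBRARY
            if risks & context_risks or reqs & context_reqs]

print(select_controls({"Prompt injection"}, {"EU-AI-ACT:documentation"}))
# -> ['CONTROL-009', 'CONTROL-022']
```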

Looking ahead, the UCF's most important contribution may be enabling proactive rather than reactive governance. Organizations can implement relevant controls now, confident their efforts won't become obsolete as regulatory environments evolve. Rather than repeatedly redeveloping governance approaches, companies can iteratively refine existing practices as standards mature. The framework provides the foundation for governance that scales with AI initiatives while maintaining rigorous oversight, particularly important as enterprises face pressure to accelerate AI adoption while maintaining responsible practices.

The limitations the authors acknowledge—lack of risk quantification, reliance on human-centered approaches, and need for continuous framework evolution—suggest areas where additional research and practical validation will be critical. However, the core insight that unified governance can reduce complexity while improving coverage provides a compelling direction for organizations struggling with AI governance fragmentation.

For legal and product teams, this research offers a roadmap for building governance capabilities that anticipate regulatory requirements rather than reacting to them. The framework's emphasis on concrete implementation guidance, technical integration, and evidence-based compliance creates opportunities for organizations to demonstrate sophisticated AI governance as a competitive advantage rather than just compliance overhead.

The Unified Control Framework: Establishing a Common Foundation for Enterprise AI Governance, Risk Management and Regulatory Compliance
The rapid adoption of AI systems presents enterprises with a dual challenge: accelerating innovation while ensuring responsible governance. Current AI governance approaches suffer from fragmentation, with risk management frameworks that focus on isolated domains, regulations that vary across jurisdictions despite conceptual alignment, and high-level standards lacking concrete implementation guidance. This fragmentation increases governance costs and creates a false dichotomy between innovation and responsibility. We propose the Unified Control Framework (UCF): a comprehensive governance approach that integrates risk management and regulatory compliance through a unified set of controls. The UCF consists of three key components: (1) a comprehensive risk taxonomy synthesizing organizational and societal risks, (2) structured policy requirements derived from regulations, and (3) a parsimonious set of 42 controls that simultaneously address multiple risk scenarios and compliance requirements. We validate the UCF by mapping it to the Colorado AI Act, demonstrating how our approach enables efficient, adaptable governance that scales across regulations while providing concrete implementation guidance. The UCF reduces duplication of effort, ensures comprehensive coverage, and provides a foundation for automation, enabling organizations to achieve responsible AI governance without sacrificing innovation speed.

TLDR: The Unified Control Framework (UCF) is proposed as a solution to the urgent need for effective enterprise AI governance, addressing fragmentation in current risk management, regulatory compliance, and implementation guidance approaches. Existing methods suffer from isolated risk domains, varied regulations across jurisdictions, and high-level standards that lack concrete operational instructions, leading to increased governance costs and a perceived conflict between innovation and responsibility.

The UCF offers a comprehensive and efficient solution built upon three core components:

• A synthesized risk taxonomy covering 15 types and approximately 50 specific scenarios, spanning both organizational (e.g., Performance & Robustness, Third Party) and societal concerns (e.g., Societal Impact, Environmental Harm). This taxonomy was developed by synthesizing major risk frameworks like the MIT AI Risk Repository, NIST's AI Risk Management Framework, and IBM's AI Risk Atlas, with the aid of AI for semantic analysis and expert review to ensure it is mutually exclusive and collectively exhaustive (MECE).

• A library of "policy requirements" that translates AI-relevant regulatory texts and standards into structured, goal-oriented statements, outlining "what" needs to be achieved in AI governance.

• A parsimonious set of 42 controls, representing actionable governance processes. These controls are designed to simultaneously address multiple risk scenarios and compliance obligations, offering dual-purpose efficiency. Each control includes detailed implementation guidance to bridge abstract requirements with concrete actions, adaptable to specific use cases through configurations.

The framework establishes bidirectional mappings between these components, allowing organizations to efficiently select a tailored set of controls to mitigate relevant risks and meet compliance requirements. This unified approach significantly reduces duplication of effort and ensures clear traceability, simplifying how organizations can demonstrate adherence to both internal risk management and external regulations. For instance, a single documentation control (CONTROL-009) can support policy requirements from the EU AI Act and the Colorado AI Act, while simultaneously mitigating risks like "Opaque system architecture".

The UCF’s effectiveness was validated by mapping it to the Colorado AI Act, demonstrating its comprehensiveness and adaptability. This validation process also led to the addition of a general incident response control (CONTROL-042), ensuring the framework covers a broader range of needs. The development of the framework involved iterative synthesis, expert review, and the use of AI tools (Claude, GPT-4o) for content generation and refinement, particularly for implementation guidance.

While primarily a human-centered framework, the UCF's structured design provides a strong foundation for automation and technical integration, enabling more efficient control configuration and evidence collection based on specific governance contexts. Future work aims to enhance the framework by incorporating quantitative measures of risk mitigation, developing more context-specific automation, and ensuring its continuous evolution in response to new technological capabilities and regulatory changes. Ultimately, the UCF strives to make comprehensive AI governance more accessible and efficient for enterprises, thereby enabling responsible AI development and deployment without sacrificing innovation speed.