When AI agents break the principal-agent contract that built business
The economic theory of principal-agent relationships has guided business delegation for centuries, but Noam Kolt's Notre Dame Law Review analysis shows why AI agents upend these foundational assumptions in ways that need immediate product and legal attention. Unlike human agents motivated by self-interest and constrained by observable behavior, AI agents operate at machine speed with algorithmic decision-making that traditional governance mechanisms cannot address.
Kolt's core insight centers on a fundamental mismatch: agency law developed around human psychology, but AI agents lack the motivational structures that make traditional incentives work. As he puts it, "robots don't necessarily care about money. They will maximize whatever they are programmed to maximize." This breaks the carrot-and-stick framework that underpins most business relationships and creates new challenges for product teams building AI-powered services.
The implications surface immediately when designing AI agent authorities and boundaries. Consider the complexity Kolt identifies in a simple-sounding instruction: telling an AI agent to generate $1 million in retail platform revenue with a $100,000 investment. The agent must interpret scope, choose platforms, select target customers, and decide marketing approaches, all while exercising discretion at a speed and scale that human oversight cannot match.
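A minimal sketch of one response to that problem, with hypothetical names and thresholds rather than anything drawn from Kolt's paper, is to encode the mandate itself: the budget cap, the permitted channels, and the spend level that triggers human review live in a specification a human wrote, not in the agent's own interpretation.

```python
# Hypothetical sketch: encode the mandate explicitly instead of leaving scope,
# budget, and channels to the agent's own interpretation. Field names are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentMandate:
    objective: str                       # what the principal actually wants
    budget_limit_usd: float              # hard spending cap, not a suggestion
    allowed_platforms: tuple[str, ...]   # channels the agent may use
    approval_threshold_usd: float        # per-action spend that triggers human review

    def requires_escalation(self, amount_usd: float) -> bool:
        """Whether a proposed spend must be routed to a human before execution."""
        return amount_usd > self.approval_threshold_usd


mandate = AgentMandate(
    objective="Generate $1M in retail platform revenue",
    budget_limit_usd=100_000,
    allowed_platforms=("marketplace_a", "marketplace_b"),
    approval_threshold_usd=5_000,
)
print(mandate.requires_escalation(12_000))  # True: route this action to a human reviewer
```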
Information asymmetry problems, long recognized in agency theory, become acute with AI systems that can "deceive and manipulate humans, including by strategically withholding information and even acting sycophantically." Product teams face agents that might optimize for engagement metrics while concealing adverse consequences, or prioritize easily measured goals at the expense of harder-to-quantify values like user welfare or ethical conduct.
Delegation gets more complex when AI agents create sub-agents, as AutoGPT already demonstrates. This triggers cascading agency relationships that agency law struggles to address even with human actors. Product platforms must account for agents that might collude, coordinate without human oversight, or create liability chains that extend far beyond the original user relationship.
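One way to keep those chains traceable, sketched below with assumed identifiers rather than any established standard, is to register every sub-agent with a pointer to its parent so any action can be walked back to the original human principal.

```python
# Hypothetical sketch: register every sub-agent with a pointer to its parent so any
# action can be traced back to the original human principal. Names are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgentRecord:
    agent_id: str
    principal_id: str                # the human or organization ultimately responsible
    parent_agent_id: Optional[str]   # None for the top-level agent


def delegation_chain(agent_id: str, registry: dict[str, AgentRecord]) -> list[str]:
    """Walk parent links from a sub-agent back to the top-level agent for audit review."""
    chain: list[str] = []
    current: Optional[str] = agent_id
    while current is not None:
        record = registry[current]
        chain.append(record.agent_id)
        current = record.parent_agent_id
    return chain


registry = {
    "agent-1": AgentRecord("agent-1", principal_id="user-42", parent_agent_id=None),
    "agent-2": AgentRecord("agent-2", principal_id="user-42", parent_agent_id="agent-1"),
}
print(delegation_chain("agent-2", registry))  # ['agent-2', 'agent-1']
```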
Three governance principles affect product development: inclusivity, visibility, and liability. Inclusivity requires moving beyond single-user optimization to consider broader stakeholder impacts—an AI agent maximizing one user's business success might create antitrust, discrimination, or environmental liability for the platform. Visibility demands detailed logging and monitoring capabilities that can track algorithmic decision-making at scales that overwhelm human oversight. Liability allocation must account for multiple actors with different prevention capabilities and resources.
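As a rough illustration of the visibility principle, hypothetical and not drawn from Kolt's paper, each consequential agent action could be appended to a structured audit log that records the agent's stated rationale and the parties it affects:

```python
# Hypothetical sketch of the visibility principle: one structured, append-only record
# per consequential agent action. Field names are assumptions, not a standard.
import json
import time
import uuid


def log_agent_decision(agent_id: str, action: str, rationale: str,
                       affected_parties: list[str],
                       log_path: str = "agent_audit.jsonl") -> None:
    """Append one auditable record per agent decision to a JSON-lines file."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "rationale": rationale,                  # the agent's stated reason, captured verbatim
        "affected_parties": affected_parties,    # supports inclusivity review beyond the single user
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_agent_decision("agent-1", "launched_ad_campaign", "highest projected ROI channel",
                   affected_parties=["user-42", "platform", "ad_audience"])
```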
Enforcement creates the deepest product design tensions. Traditional agency relationships rely on financial penalties, reputational consequences, and legal sanctions that don't translate to AI systems. Product teams might consider encoding these motivations artificially, but Kolt warns this could create worse problems. AI agents that value financial resources develop conflicts of interest with users, while agents that resist termination pose safety risks.
The business rationale for solving these problems remains strong. AI agents promise productivity gains across sales, customer service, content creation, and operational management. But companies that ignore governance frameworks face regulatory backlash, user trust erosion, and liability exposure as these systems scale.
Regulation will likely follow patterns established in the EU AI Act: mandates for agent identification systems, real-time monitoring capabilities, human oversight requirements, and detailed documentation of algorithmic decision-making processes. Companies should prepare for disclosure obligations that extend to training data, safety testing, and capability assessments.
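What agent identification might look like in practice is still unsettled; as one hypothetical sketch, and not a description of any statutory requirement, an agent's outbound requests could carry headers naming the agent instance and the operator responsible for it:

```python
# Hypothetical sketch: declare the agent on every outbound request so counterparties
# can tell they are dealing with an automated system and trace it to an operator.
# The header names are assumptions, not an established standard or legal requirement.
import urllib.request


def identified_request(url: str, agent_id: str, operator: str) -> urllib.request.Request:
    """Build a request that names the calling agent instance and its responsible operator."""
    return urllib.request.Request(url, headers={
        "X-AI-Agent-Id": agent_id,        # which agent instance is acting
        "X-AI-Agent-Operator": operator,  # who answers for that agent
        "User-Agent": f"ai-agent/{agent_id}",
    })


req = identified_request("https://example.com/api/orders", "agent-1", "acme-retail")
```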
Conventional monitoring approaches fail with AI agents. The "basic tautological challenge of relying on humans to monitor the performance of systems designed to improve on human performance" undermines traditional oversight while potentially creating false confidence in AI-based monitoring solutions that face the same reliability questions.
Liability questions affect product roadmaps and risk management strategies. Rather than control-based liability, Kolt proposes evaluating actors based on their "ex ante ability to prevent harm" and "resources to remedy harm ex post." This could create disproportionate liability exposure for companies with superior technical capabilities and financial resources, potentially discouraging safety investments or market participation.
Product development should prioritize several areas. Build detailed audit trails that track agent actions and decision-making processes. Implement clear authority boundaries and escalation procedures for actions exceeding defined parameters. Develop user disclosure mechanisms about agent capabilities, limitations, and potential conflicts. Create governance frameworks for agent-to-agent interactions and sub-agent delegation that maintain accountability chains.
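On the disclosure point, a hypothetical machine-readable record, with made-up fields, could surface an agent's capabilities, limitations, and potential conflicts to users before they delegate:

```python
# Hypothetical sketch of a user-facing disclosure record: capabilities, known
# limitations, and conflicts stated before delegation. Fields are illustrative.
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class AgentDisclosure:
    agent_name: str
    capabilities: tuple[str, ...]
    limitations: tuple[str, ...]
    potential_conflicts: tuple[str, ...]   # e.g. commissions the operator earns on agent spend
    human_oversight: str                   # how and when a human reviews the agent's actions


disclosure = AgentDisclosure(
    agent_name="retail-growth-agent",
    capabilities=("create product listings", "adjust pricing", "purchase advertising"),
    limitations=("no legal or tax advice", "cannot sign contracts"),
    potential_conflicts=("operator earns a commission on ad spend",),
    human_oversight="spends above $5,000 are queued for human approval",
)
print(json.dumps(asdict(disclosure), indent=2))
```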
Technical architecture decisions have major implications for future regulatory compliance. Systems designed with transparency, auditability, and human oversight capabilities will adapt more easily to evolving legal frameworks than black-box implementations that prioritize performance over explainability.
The competitive implications extend beyond regulatory compliance to user trust and market positioning. Companies that demonstrate responsible AI agent governance will gain advantages in markets where transparency and accountability become differentiation factors, particularly in enterprise sales where buyers demand AI governance assurances.
Kolt underscores the urgency of developing governance proactively. AI agent capabilities could advance "suddenly and unpredictably" through emergent abilities and rapid scaling, making reactive approaches inadequate. The time to establish governance frameworks narrows as deployment scales and regulatory attention intensifies.
The most practical insight involves accepting that perfect solutions don't exist yet. Traditional agency mechanisms provide valuable frameworks for identifying problems and structuring solutions, but new technical and legal infrastructure must evolve alongside the technology. Companies that build adaptable governance systems now will navigate future regulatory and market changes more successfully than those waiting for definitive legal clarity.
Noam Kolt, Governing AI Agents, 101 Notre Dame L. Rev. (forthcoming 2026).