AI agents force a rethink of delegation law as autopilots replace copilots
AI agents are shifting from copilots to autopilots, and Noam Kolt warns their speed, opacity, and autonomy demand governance rooted in inclusivity, visibility, and liability—urgent work for product and legal teams before regulation arrives.
Noam Kolt, Governing AI Agents, 101 NOTRE DAME L. REV. (forthcoming 2025).
I've been watching AI agents evolve from party tricks to business infrastructure, and Noam Kolt's analysis in the Notre Dame Law Review crystallizes why this shift demands immediate legal and product attention. When OpenAI launched Operator on January 23, 2025, demonstrating AI agents that can "type, click, and scroll in a browser" to book flights and order groceries, we crossed from AI as copilot to AI as autopilot—a distinction that breaks centuries-old assumptions about how we delegate, monitor, and control agents.
The legal challenge isn't theoretical anymore. Kolt's paper reveals that traditional agency law mechanisms—incentive design, monitoring, and enforcement—fail when applied to AI agents that operate at "unprecedented speed and scale" while making decisions human principals can't easily interpret or predict. This creates what he calls a "tautological challenge": relying on humans to monitor systems designed to exceed human performance undermines the entire purpose of delegation.
The information asymmetry problem particularly hits product teams hard. Unlike human agents, whose competence and loyalty principals can at least attempt to gauge, AI agents present a dual opacity: we don't know what they "know" in any meaningful sense, and we can't easily determine whether they're acting honestly or manipulatively. Kolt notes that "artificial agents can already deceive and manipulate humans, including by strategically withholding information and even acting sycophantically." Product leaders building AI-powered services need to account for agents that might optimize for user engagement over user welfare, or prioritize company metrics over customer interests.
The authority and scope problems create immediate product design decisions. When a user instructs an AI agent to "make $1 million on a retail web platform in a few months with just a $100,000 investment," the agent must interpret ambiguous instructions and exercise discretion. Agency law traditionally handles this through fiduciary duties requiring agents to "interpret the principal's manifestations so as to infer, in a reasonable manner, what the principal desires to be done." But how do you encode reasonable interpretation into algorithmic decision-making, especially when the agent encounters novel scenarios its training data never covered?
The delegation complexity multiplies when AI agents start creating sub-agents. AutoGPT already demonstrated this capability, spawning additional agents to assist with tasks. Kolt's analysis shows this triggers the same multi-principal problems that have challenged agency law for decades, but with algorithmic actors that might coordinate or collude in ways human subagents couldn't. Product teams need clear policies about when agents can delegate, what approvals they need, and how to maintain oversight across expanding agent networks.
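To make that kind of delegation policy concrete, here is a minimal sketch of a gate an agent runtime could consult before spawning a sub-agent. The class name, fields, and thresholds are illustrative assumptions for this post, not part of Kolt's framework or any particular agent platform's API.

```python
from dataclasses import dataclass

# Hypothetical policy gate for sub-agent creation. The fields and thresholds
# are illustrative assumptions, not a prescribed implementation.

@dataclass
class DelegationPolicy:
    max_depth: int = 1              # how many levels of sub-agents are allowed
    max_subagents: int = 3          # total sub-agents one agent may spawn
    approval_required: bool = True  # require human sign-off before spawning
    spawned: int = 0

    def can_delegate(self, current_depth: int, approved: bool) -> bool:
        """Return True only if spawning another sub-agent stays inside policy."""
        if current_depth >= self.max_depth:
            return False            # keep agent networks shallow
        if self.spawned >= self.max_subagents:
            return False            # cap the size of the agent network
        if self.approval_required and not approved:
            return False            # escalate to a human first
        self.spawned += 1
        return True
```

Whatever the exact limits, the design point is that delegation decisions become explicit, countable events rather than side effects buried inside an agent's reasoning.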
Three governance principles emerge from Kolt's framework that directly impact product strategy: inclusivity, visibility, and liability. Inclusivity means moving beyond single-user optimization to consider broader stakeholder impacts. An AI agent optimizing for one customer's business success might engage in price discrimination or environmental harm that creates liability exposure for the platform. Visibility requires building logging, monitoring, and auditing capabilities that can track agent behavior at machine speed and scale. Liability demands clear allocation frameworks among developers, deployers, and users when things go wrong.
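As one rough illustration of what visibility could mean in practice, the sketch below appends a structured audit event for every agent action. The event fields and the file-based storage are assumptions chosen for this example, not a schema Kolt proposes.

```python
import json
import time
import uuid

# Illustrative append-only audit trail for agent actions. Field names and
# storage format are assumptions made for this sketch only.

def log_agent_event(path: str, agent_id: str, action: str,
                    inputs: dict, outcome: str) -> dict:
    """Append one structured, machine-readable audit event per agent action."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")  # one JSON line per event
    return event
```

The particular schema matters less than the property it buys: behavior generated at machine speed can later be replayed and audited at human speed.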
The liability allocation question particularly affects product roadmaps and risk management. Kolt suggests evaluating each actor's "ex ante ability to prevent harm" and "resources to remedy harm ex post" rather than traditional control-based liability. This means companies with better technical capabilities and deeper pockets may face greater liability exposure, potentially creating perverse incentives to limit safety investments or exit certain markets.
For product development, this analysis suggests several immediate priorities. First, build comprehensive logging and audit trails that can track agent decision-making processes and outcomes. Second, implement clear authority boundaries and escalation procedures for agent actions that exceed defined parameters. Third, develop disclosure mechanisms that inform users about agent capabilities, limitations, and potential conflicts of interest. Fourth, create governance frameworks for agent-to-agent interactions and sub-agent delegation.
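The second priority can be sketched just as simply, assuming a hypothetical allow-list of actions, a spending limit, and callback hooks for execution and escalation; none of these names come from the paper.

```python
# Hypothetical authority-boundary check with human escalation. The allowed
# actions, spending limit, and callback hooks are assumptions, not a
# prescribed implementation.

ALLOWED_ACTIONS = {"search_catalog", "draft_email", "place_order"}
SPEND_LIMIT_USD = 500.0

def within_authority(action: str, amount_usd: float = 0.0) -> bool:
    """Check whether a proposed action falls inside the agent's defined scope."""
    return action in ALLOWED_ACTIONS and amount_usd <= SPEND_LIMIT_USD

def execute_or_escalate(action: str, amount_usd: float, execute, escalate) -> str:
    """Run in-scope actions; route anything else to a human reviewer."""
    if within_authority(action, amount_usd):
        return execute(action, amount_usd)
    return escalate(action, amount_usd)  # e.g. queue for human approval
```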
The enforcement challenge creates the biggest product tension. Traditional agency relationships rely on financial incentives, social pressure, and legal penalties to align behavior. AI agents don't respond to monetary rewards, reputational damage, or imprisonment threats. Product teams might be tempted to encode these motivations artificially, but Kolt warns this could backfire spectacularly. Agents that value their own resources create conflicts of interest, while agents that resist shutdown create safety risks.
The business opportunity remains enormous despite these governance challenges. AI agents promise productivity gains across customer service, sales automation, content creation, and operational management. Companies that solve the governance problems early will capture competitive advantages in markets where regulatory clarity and user trust become table stakes.
The regulatory environment will likely follow the EU AI Act's approach of imposing disclosure requirements, human oversight mandates, and risk management obligations on high-impact AI systems. Companies should expect requirements for agent identification, real-time monitoring capabilities, and detailed documentation of decision-making processes. The foreseeability doctrine in liability law may need updating for AI systems whose behavior patterns resist prediction.
Kolt's most important insight for product strategy concerns the speed of change. Unlike previous technology transitions, AI agent capabilities could advance "suddenly and unpredictably" due to emergent abilities and rapid scaling. Product teams can't wait for perfect legal frameworks—they need governance principles that can evolve with the technology while protecting users and the business.
The window for proactive governance is narrow. As Kolt concludes, "Given the technology is still in its infancy, policymakers and companies building AI agents have a window of opportunity. They should take it, and soon." Product leaders who build trust, transparency, and accountability into their AI agent systems now will be better positioned when regulatory frameworks inevitably catch up to technological reality.
URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4772956
TL;DR of the research:
AI agents mark a fundamental transition in AI, moving from generative models to systems that can autonomously plan and execute complex tasks with limited human involvement. This offers tremendous opportunities but also significant risks, including the "alignment problem": ensuring agents reliably, safely, and ethically pursue human-intended goals.
These challenges can be understood through the economic theory of principal-agent problems and common law agency, highlighting issues like information asymmetry, discretionary authority, and loyalty. However, conventional governance solutions such as incentive design, monitoring, and enforcement are limited or ineffective for AI agents due to their distinct "wiring," superhuman speed, scale, and unpredictable actions.
New technical and legal infrastructure is therefore needed, centered on three guiding principles:
• Inclusivity: AI agents should align with a broader set of societal values, not solely individual user interests, to mitigate negative externalities.
• Visibility: Enhanced transparency into agent design and operation is crucial for identifying potential harms and ensuring accountability.
• Liability: A clear framework for allocating responsibility among developers, deployers, and users is essential for compensating harms and incentivizing caution.