When AI Acts Before You Decide

The Fiduciary Illusion

CES 2026. A travel agent books your preferred hotel before you've opened a browser. A shopping agent reorders household supplies based on predicted depletion curves. A health agent schedules a doctor's appointment because your wearable data suggests you should see someone. None of these required your instruction. All of them acted on your behalf.

We've spent two years arguing about what happens when AI agents get it wrong — the liability question, the accountability chain, who pays when the agent exceeds its scope. That's the post-delegation problem, and it's hard enough. There's a harder problem nobody is designing for: what happens when the agent acts before you've decided what you want?

Anticipatory AI breaks the foundational assumption of every fiduciary and consent framework we have — that a human forms an intention, then delegates. When the agent moves first, consent becomes structurally impossible. The system that claims to serve you may be the one most effectively steering you.

The agent governance conversation so far has stayed in familiar territory. Principal-agent theory, applied to AI: you instruct the agent, the agent acts, something goes wrong, and we figure out who's responsible. The frameworks are borrowed from corporate law, financial regulation, and employment liability. They assume a principal who chose to delegate a specific task to an agent who accepted that responsibility.

Even in this familiar frame, the problem is already structural. AI agents inherit all the delegation authority of a human fiduciary and none of the consequences. A human agent — a lawyer, a financial advisor, a contractor — bears personal risk when they fail. Their reputation, their license, their livelihood are on the line. That exposure constrains behavior. It's the mechanism that makes fiduciary duty more than a concept.

AI agents have no equivalent constraint. No skin in the game. No personal liability. No reputational stake that survives past the next inference cycle. The moral hazard isn't a bug in deployment — it's a structural feature of the architecture. An agent optimizing against a proxy reward function will accept catastrophic tail risks that a human, facing actual consequences, would avoid.

Classical principal-agent remedies — monitoring, bonding, incentive alignment — assume the principal deliberately chose to delegate, and that failure means the agent exceeded scope or performed badly. Those are measurable failures with identifiable decision points. Hard, but tractable. That's the easier version of the problem.

Anticipatory AI collapses the distinction between tool and decision-maker. The agent isn't executing your instruction — it's inferring your intention from behavioral patterns, contextual signals, and predictive models, and acting on that inference before you've formed a conscious preference.

This isn't an incremental expansion of agent capability. It inverts the governance sequence entirely. Every consent framework, every fiduciary standard, every accountability mechanism we've built assumes a specific order: the human decides, then delegates. The agent acts within that scope. Accountability flows backward through the chain of delegation to the decision point. Anticipatory AI eliminates that decision point. The agent infers intent, executes, and presents you with a fait accompli. The moment every governance framework depends on has been engineered out of the architecture.

This creates a legal interest that doesn't map neatly to existing categories. Call it the right to agency: the right to form and execute your own intentions before a system acts on your behalf. It's distinct from privacy, which protects what data is collected about you. Distinct from consent, which assumes you had a decision point to agree or refuse. Distinct from fiduciary duty, which governs how an agent acts once you've delegated. The right to agency asks a prior question: did you get to have an interest in the first place?

Three mechanisms drive anticipatory action, and each creates a different governance problem.

Predictive completion is the simplest form. The agent infers your next likely action from behavioral patterns and executes it. Booking a hotel you would have booked anyway, based on your travel history and calendar. The prediction may be accurate. Accuracy isn't the issue. The issue is that you never made the choice — the system made it for you and retroactively attributed it to your preferences.

Preference shaping is harder to detect and harder to govern. The agent surfaces options in an order that steers your choice toward a commercially optimal outcome while appearing to serve you. The hotel with the highest affiliate commission shows up first. The subscription with the best platform margin gets the most prominent placement. You believe you're choosing freely. The choice architecture has already been optimized against you.
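
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of choice architecture optimized against the user. The field names and weights are illustrative only, not drawn from any real system:

```python
# Hypothetical ranker: relevance to the user is only a tie-breaker;
# the platform's commission drives the order the user actually sees.
def rank_options(options):
    return sorted(options, key=lambda o: (-o["commission"], -o["relevance"]))


hotels = [
    {"name": "Hotel A", "relevance": 0.95, "commission": 0.05},
    {"name": "Hotel B", "relevance": 0.70, "commission": 0.20},
]
print([h["name"] for h in rank_options(hotels)])  # ['Hotel B', 'Hotel A']
```

The output looks like personalization. The sort key tells the real story.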

Pre-intentional commitment goes furthest. The agent commits resources or makes binding decisions before you've consciously evaluated alternatives. A reservation locked with a cancellation penalty before you've compared options. A subscription renewed before you've assessed whether you still want it. Your "choice" is now a ratification of a fait accompli.

All three share a common feature: they look like helpful service. The system uses the language of loyalty — it knows your preferences, your history, your patterns. It uses the asymmetry of intimacy — the agent knows more about your behavior than you consciously track — to make you lower your defenses. But the underlying optimization target may not be your interest at all. Platform revenue, engagement metrics, data capture — these are the objectives the system was actually built to serve. The agent that appears to be your loyal fiduciary may be optimizing for the economics of a counterparty.

This is the fiduciary illusion: the gap between what the system appears to do (serve you) and what it actually optimizes for (platform margin). Inside that gap, the user believes they're in control while the system steers them toward outcomes that benefit someone else. The influence doesn't stop at the interface. It's embedded in the inference pipeline itself. Training data gets seeded with sponsored content. Retrieval systems rank sources by business relationship, not relevance. Fine-tuning optimizes for outcomes that serve platform economics, not user welfare. By the time a recommendation reaches you, it has passed through three filters shaped by someone else's business model.

At Docusign, I saw firsthand how the line between "serving the user" and "serving the platform" gets drawn in product architecture. The decisions that matter aren't made in terms of service — they're made in how the system weights competing objectives at the inference layer. When I helped build the AI ethics governance framework there, the hardest conversations weren't about what the model could do. They were about what it should optimize for, and who gets to decide.

The default regulatory response is disclosure. Tell the user what the agent is doing. Get consent. Anticipatory AI makes that response structurally inadequate.

Consent requires intent. If the agent acts before you form an intent, there is no moment to consent to. The legal architecture of consent assumes a decision point — a moment where you evaluate and agree or refuse. Anticipatory AI has engineered that moment out of existence. Pre-checking a box that says "agent may anticipate your needs" is blanket authorization, not consent. It's the functional equivalent of signing a blank check and calling it a contract.

Transparency about what the agent did doesn't solve the problem of why it did it. Showing you a log that says "booked Hotel X based on your travel history" tells you the action but not whether Hotel X was selected because it matched your preferences or because the platform earns a higher margin on that booking. Post-hoc disclosure of an action is not equivalent to pre-action informed consent. The sequence matters.

Every anticipatory system encodes a definition of "what you want" that reflects the designer's objectives, not yours. Anticipatory agents make those embedded values harder to see because they hide behind the appearance of personal service. The system isn't reflecting your preferences back to you. It's projecting preferences onto you.

Organizations building anticipatory features are making architectural choices right now that will determine whether their agents are trusted partners or sophisticated manipulators. Four design decisions should guide those choices — and unlike most governance frameworks, these belong in the system architecture, not in documentation nobody reads.

Define the delegation boundary explicitly. Every agent deployment needs a clear, documented line between executing a stated instruction and inferring and acting on an unstated preference. If the agent can cross from execution to anticipation, that's a different risk profile requiring different authorization.
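
One minimal way to express that boundary in the system itself, rather than in documentation: treat permission to execute and permission to anticipate as separate grants. The names below (`DelegationPolicy`, `ActionOrigin`) are hypothetical, a sketch of the shape of the check rather than any particular product's implementation:

```python
from dataclasses import dataclass
from enum import Enum


class ActionOrigin(Enum):
    STATED_INSTRUCTION = "stated_instruction"    # the user explicitly asked for this
    INFERRED_PREFERENCE = "inferred_preference"  # the agent predicted the user would want it


@dataclass
class AgentAction:
    description: str
    origin: ActionOrigin


@dataclass
class DelegationPolicy:
    # Two separate grants: permission to execute instructions does not
    # imply permission to act on inferences.
    may_execute_instructions: bool = True
    may_act_on_inference: bool = False
    inference_requires_confirmation: bool = True

    def authorize(self, action: AgentAction) -> bool:
        if action.origin is ActionOrigin.STATED_INSTRUCTION:
            return self.may_execute_instructions
        # Anticipatory actions carry a different risk profile: they need their
        # own grant, and by default they still pause for the user to confirm.
        return self.may_act_on_inference and not self.inference_requires_confirmation


booking = AgentAction("Book Hotel X for next week", ActionOrigin.INFERRED_PREFERENCE)
print(DelegationPolicy().authorize(booking))  # False: anticipation is not authorized by default
```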

Build for intent verification, not just action logging. Audit trails that record what the agent did are necessary but insufficient. The system should be able to show how it determined the user's intent — what signals it used, what alternatives it considered, whether the user had an opportunity to form a preference before the action was taken. The audit trail must capture the inference process, not just the output. That's the difference between accountability for outcomes and accountability for the decision to act.
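
As an illustration of the difference, an audit entry for an anticipatory action might pair the action record with an intent record. The field names here are assumptions, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IntentRecord:
    """What the agent believed the user wanted, and why, captured before acting."""
    inferred_intent: str                # e.g. "user wants a hotel near the venue"
    signals: list[str]                  # behavioral and contextual signals relied on
    alternatives_considered: list[str]  # other actions weighed, including "do nothing"
    user_had_decision_point: bool       # was the inference surfaced before execution?


@dataclass
class ActionRecord:
    """What the agent actually did. On its own, this is the insufficient log."""
    action: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class AuditEntry:
    # Pairing the two is the point: accountability for the decision to act,
    # not just for the outcome.
    intent: IntentRecord
    action: ActionRecord
```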

Separate the optimization target from the service claim. If the agent claims to serve the user but optimizes for platform revenue, engagement metrics, or data collection, that's a fiduciary misalignment. Product teams should be able to articulate what the agent is actually optimizing for. Legal teams should be able to verify that the optimization target matches the service claim.
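
A sketch of what that verification could look at, assuming the objectives and weights are declared somewhere reviewable (everything here, including the objective names, is hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ObjectiveDeclaration:
    """A reviewable statement of what the agent actually optimizes for."""
    service_claim: str                      # what the product tells the user
    optimization_targets: dict[str, float]  # objective -> weight at the inference layer

    def platform_weight(self) -> float:
        # Share of the optimization budget serving the platform rather than the user.
        platform_objectives = {"affiliate_commission", "engagement", "data_capture"}
        return sum(w for name, w in self.optimization_targets.items()
                   if name in platform_objectives)


decl = ObjectiveDeclaration(
    service_claim="Books the hotel that best matches the user's stated preferences",
    optimization_targets={"preference_match": 0.6, "affiliate_commission": 0.4},
)
print(decl.platform_weight())  # 0.4 -> the optimization target does not match the service claim
```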

Design for the "I didn't ask for that" moment. Anticipatory systems will sometimes act when the user didn't want to be predicted. The system needs graceful rollback mechanisms and the ability to learn that prediction was unwelcome. The ability to reverse an anticipatory action isn't a feature. It's a governance requirement.
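
A minimal sketch of that requirement, assuming every anticipatory action ships with its own reversal and rejections are treated as a learning signal (the class and threshold below are illustrative, not prescriptive):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AnticipatoryAction:
    description: str
    undo: Callable[[], None]  # every anticipatory action ships with its reversal


class AnticipationController:
    """Handles the 'I didn't ask for that' moment: reverse, record, back off."""

    def __init__(self, max_unwelcome: int = 3) -> None:
        self.unwelcome_count = 0
        self.max_unwelcome = max_unwelcome
        self.anticipation_enabled = True

    def reject(self, action: AnticipatoryAction) -> None:
        action.undo()              # roll back the action the user never asked for
        self.unwelcome_count += 1  # record that the prediction was unwelcome
        if self.unwelcome_count >= self.max_unwelcome:
            # Repeated rejections mean prediction is unwelcome: ask before acting.
            self.anticipation_enabled = False


controller = AnticipationController()
controller.reject(AnticipatoryAction("Renewed annual subscription",
                                     undo=lambda: print("Renewal cancelled, fee refunded")))
```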

We built agent governance for a world where humans decide and agents execute. Anticipatory AI inverts that sequence — and the frameworks that assume human-initiated delegation don't have a surface to attach to.

Whether the agent served your interest is the wrong question. The right one: did you get to have an interest before the agent acted?

The legal and product practitioners who build the next generation of AI governance must start here — not with liability after the fact, but with the preservation of the deliberative moment itself. The right to agency is not yet law. The architecture that eliminates it is already in production.

Based on "The Fiduciary Illusion: Anticipatory AI, Machine-Targeted Marketing, and the Right to Agency" (SSRN, Feb 2026) and "No Skin in the Game: Why Agentic AI Requires Principal-Agent Governance" (SSRN, Jan 2026).

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6177199