The power vs. control tradeoff in workflow design (now with Agents!)
Teams building AI systems keep getting stuck on this question: should they use autonomous agents or structured workflows? The real answer is both. The most defensible AI systems combine these two approaches, using agents where you need flexibility and workflows where you need control.
This matters for governance because AI systems are non-deterministic by nature. You can't force them to be perfectly predictable, but you can design them to be traceable and observable. That's where composition comes in. By thoughtfully combining agent autonomy with workflow structure, you create systems that are both capable and auditable—the combination regulators and production teams actually need.
Understanding how these components work together is about more than backend architecture; it's about building AI systems that can operate reliably at scale while meeting customer expectations and compliance requirements. The organizations getting this right aren't choosing one primitive over the other. They're learning how to compose them.
Two Primitives, One System
Before we can compose agents and workflows into a governed system, we need a clear picture of what each one actually does. Think of them as two different tools in a toolkit—each designed for different jobs, but most powerful when used together.
Why "Primitives"? I'm borrowing the term "primitive" from computer science, where it refers to the most basic, irreducible building blocks of a system—the foundational elements that can't be broken down further and from which everything else is composed. Agents and workflows aren't just two types of AI systems among many; they're the two fundamental patterns from which all governed AI implementations are built. You might chain multiple agents together, embed workflows inside agents, or orchestrate agents through workflows—but you're always working with combinations of these two elements. Understanding them as primitives helps us think architecturally about governance: if we can define clear trust boundaries and intervention points for these two building blocks, we can compose them into systems of arbitrary complexity while maintaining oversight and control.
Agents: The Decision-Makers
An AI agent is an autonomous entity that can reason, remember, and act. The easiest way to understand an agent is to think of it like a player in a turn-based game. You make a move, then the agent makes a move. The agent's move might be a simple response, or it might involve using a tool, calling an external service, or kicking off a more complex sequence of actions.
What makes agents distinctive is that they maintain state between turns. They remember what happened earlier in the conversation. They can decide which tools to use based on context. They iterate through a reasoning process until they reach a conclusion or hit a stopping point. This autonomy is their strength—and the source of their governance challenges.
Agents are defined by what they can remember and what they can do. That combination of memory and capability is where their power comes from.
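To make that loop concrete, here is a minimal sketch in plain Python. No framework is assumed: decide_next_action stands in for an LLM call, and the single tool is an invented example.
# Minimal agent loop: memory plus tools plus bounded iteration.
# decide_next_action() is a stand-in for an LLM call; the tool is invented.

def search_knowledge_base(query):
    return f"(stub) top result for '{query}'"

def decide_next_action(goal, memory):
    # A real agent would ask the model to choose; this stub searches once, then answers.
    if not any(entry.startswith("tool:") for entry in memory):
        return "use_tool", goal
    return "respond", f"Answer to '{goal}' based on {len(memory)} remembered step(s)"

def run_agent(goal, max_turns=5):
    memory = []                          # state carried between turns
    for _ in range(max_turns):           # stopping point: bounded iterations
        action, payload = decide_next_action(goal, memory)
        if action == "use_tool":
            memory.append("tool: " + search_knowledge_base(payload))
        else:
            return payload
    return "Stopped: turn limit reached"

print(run_agent("What is our refund policy?"))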
Workflows: The Guardrails
A workflow is a structured pipeline that enforces a specific sequence of operations. If agents are like game players making strategic decisions, workflows are like the tech tree in a strategy game—you have to research bronze working before you can unlock iron working. Each step has prerequisites, and the system won't let you skip ahead.
Workflows chain steps together in a defined order, pass data from one step to the next, validate inputs and outputs, and handle errors predictably. This structure makes them ideal for any process where order of operations, data integrity, and deterministic execution matter.
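In code, that structure can be as simple as the sketch below, written in plain Python with invented step names; real workflow engines add retries, persistence, and scheduling on top of the same idea:
def validate_order(data):
    if "order_id" not in data:
        raise ValueError("order_id is required")      # fail fast, predictably
    return data

def reserve_inventory(data):
    return {**data, "reserved": True}

def send_confirmation(data):
    return {**data, "confirmation_sent": True}

# A fixed sequence: each step's output becomes the next step's input.
PIPELINE = [validate_order, reserve_inventory, send_confirmation]

def run_pipeline(data):
    for step in PIPELINE:
        try:
            data = step(data)
        except Exception as exc:
            # Errors surface in one predictable place, attributed to a step.
            raise RuntimeError(f"step '{step.__name__}' failed: {exc}") from exc
    return data

print(run_pipeline({"order_id": "A-123"}))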
The Fundamental Tradeoff? Not quite...
Choosing between an agent and a workflow is choosing between power and control.
Agents give you power—the autonomy and flexibility to handle ambiguous or dynamic tasks. Workflows give you control—the structure, reliability, and traceability you need for predictable, auditable processes.
The good news is that you don't have to choose. You can decide where you need an agent's flexibility and where you need a workflow's structure. A practical starting point is to begin with an agent for the overall task, then wrap workflows around any part of the process that needs tighter control or higher reliability.
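One way to picture that split, as a rough sketch with invented names and no particular framework implied: the agent handles the open-ended conversation, while anything touching money runs through a fixed workflow.
def refund_workflow(order_id, amount):
    # The tightly controlled path: fixed checks, clear limits, auditable outcome.
    if amount > 500:
        return {"status": "escalated", "reason": "amount above auto-refund limit"}
    return {"status": "refunded", "order_id": order_id, "amount": amount}

def support_agent(message):
    # The flexible path: a real agent would use an LLM to interpret the request.
    if "refund" in message.lower():
        return refund_workflow(order_id="A-123", amount=42.0)
    return {"status": "answered", "reply": "General guidance goes here."}

print(support_agent("I'd like a refund for my last order"))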
Why Composition Matters
Building sophisticated AI systems isn't about choosing agents or workflows. It's about combining them, and the organizations seeing the biggest returns get the workflow foundation in place before they deploy agents.
Businesses are excited about AI agents, and they should be. But every company I've seen succeed shares one trait—they all built rock-solid workflows before deploying agents. Think of an Olympic relay race. You can have the world's fastest runners, but if they fumble the baton handoff, you lose. Same with AI agents in business. An agent might be powerful, but if it's operating within broken processes, it's going to drop the baton.
Agents and workflows are composable building blocks, not competing options. An agent can function as a single, auditable step inside a larger workflow. A workflow can serve as a tool that an agent calls when needed. Neither excludes the other.
This composition matters because it addresses the core challenge in AI engineering: non-determinism. Large language models don't produce the same output every time, even with identical inputs. You can't eliminate this variability, but you can manage it by placing agent autonomy where you need flexibility and workflow structure where you need control.
The result is systems more robust than either primitive alone. An agent wrapped in a workflow gains the constraints it needs to stay reliable. A workflow with agent capabilities gains the flexibility to handle ambiguous situations. The combination creates the traceability and observability that production systems—and regulators—require.
Why Business Workflows Break
Before adding AI agents to any process, it's worth understanding why existing workflows fail. Most business processes are messier than people realize. A typical approval process might require legal sign-off, finance review, and compliance input—and the sequence can repeat multiple times as stakeholders request changes. The process might start in a CRM, move to email and Slack, trigger video calls, and loop in multiple departments before completion.
These workflows break because the systems usually don't connect and lack context. Information gets lost in handoffs. Manual steps introduce delays and errors. Nobody has visibility into where things stand or what's holding them up. When you deploy an AI agent into this environment, you're asking it to navigate chaos. The agent might be brilliant at its core task, but it can't fix a fundamentally broken process.
The organizations seeing the biggest payoffs from AI aren't just deploying agents—they're building structured, end-to-end workflows where documents are generated, routed for approval, and integrated with downstream systems without manual intervention. The agents operate on platforms that know how to route decisions and trigger actions. Instead of navigating chaos, they execute within clear boundaries.
Four Patterns for Combining Agents and Workflows
Several compositional patterns have emerged as best practices for building reliable, governable systems.
Agent Networks. A routing agent coordinates multiple specialized sub-agents, each with its own expertise. The router decides which agent should handle each part of a task, creating a clear hierarchy of responsibility that can be logged and audited. Think of it like a department head delegating to specialists.
Workflows as Agent Tools. An agent can invoke a multi-step workflow as one of its tools. For example, an agent helping plan a trip might call a booking workflow that checks availability, confirms pricing, and processes payment in a structured sequence. The agent handles the conversational reasoning; the workflow handles the transaction with full auditability.
Agent Handoffs via Workflows. When a task passes from one agent to another, a workflow can manage the transition. This ensures context, data, and state transfer correctly and traceably, preventing information loss and creating an evidentiary record of the handoff. This is the baton pass—get it wrong, and even the fastest runners lose the race.
Tool Filtering. A workflow can control which tools an agent has access to at each step. This prevents the failures that occur when agents receive too many options at once and enforces least privilege at the architectural level. The agent still reasons autonomously within each step, but the workflow constrains its available actions.
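To make the last pattern concrete, here is a small sketch of tool filtering in plain Python. The tools, steps, and policy are invented for illustration; the point is that the workflow, not the agent, decides which tools are even visible at each step.
ALL_TOOLS = {
    "search_docs": lambda q: f"(stub) docs for '{q}'",
    "issue_refund": lambda ref: f"(stub) refund issued for {ref}",
    "delete_account": lambda uid: f"(stub) account {uid} deleted",
}

# Each workflow step exposes only the tools appropriate to it (least privilege).
STEP_TOOL_POLICY = {
    "triage": ["search_docs"],
    "resolution": ["search_docs", "issue_refund"],
}

def choose_tool(allowed, request):
    # Stand-in for LLM reasoning, constrained to the allowed tools only.
    if "refund" in request and "issue_refund" in allowed:
        return "issue_refund"
    return "search_docs"

def agent_step(step, request):
    allowed = {name: ALL_TOOLS[name] for name in STEP_TOOL_POLICY[step]}
    tool_name = choose_tool(allowed, request)
    return f"[{step}] used {tool_name}: {allowed[tool_name](request)}"

print(agent_step("triage", "customer asks about a refund"))     # refund tool not visible at this step
print(agent_step("resolution", "refund order A-123"))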
Why Non-Determinism Changes Everything
Workflows are more important in AI engineering than in traditional software development because non-determinism is fundamental to how large language models work. A conventional software function called with the same inputs will produce the same outputs. An LLM will not.
This variability shifts the burden of proof onto the system operator. Because output can vary, the ability to trace a process and understand exactly what happened becomes essential. Without a clear audit trail, demonstrating that an agent's actions were compliant—and not negligent—becomes nearly impossible in a post-incident investigation.
Workflows provide this essential observability by breaking a process into discrete steps that create a verifiable record of execution. Each step can be logged, each input and output captured. When something goes wrong, investigators can reconstruct exactly what happened and why.
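A minimal illustration of step-level logging, again in plain Python with invented names; a production system would write to durable, append-only storage rather than an in-memory list:
import json
import time

AUDIT_LOG = []    # stand-in for durable, append-only audit storage

def audited_step(name, func, payload):
    record = {"step": name, "input": payload, "timestamp": time.time()}
    try:
        record["output"] = func(payload)
        record["status"] = "ok"
    except Exception as exc:
        record["status"] = "error"
        record["error"] = str(exc)
        raise
    finally:
        AUDIT_LOG.append(record)    # every attempt is captured, success or failure
    return record["output"]

def classify(ticket):
    return "billing" if "invoice" in ticket else "general"

audited_step("classify", classify, "question about an invoice")
print(json.dumps(AUDIT_LOG, indent=2))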
The Case for Readable Code
Creating audit trails is only half the battle. Those trails have to be readable by the people who need to verify them. Workflow design choices become governance decisions, not just engineering preferences.
When you compose agents and workflows into production systems, the resulting code becomes your evidence. In a post-incident investigation or regulatory audit, that code needs to tell a clear story about what the system was designed to do and why it behaved the way it did. Some workflow frameworks require developers to construct processes using abstract concepts like graphs, nodes, and edges. Others use a more intuitive syntax where you can read the code from top to bottom and immediately understand the flow.
The argument against graph-based APIs is pragmatic: workflow code should read like instructions, not like architectural blueprints. Consider the difference between these two ways of expressing the same process. Both are Python snippets that accomplish the same outcome—validating input, processing data, and sending a result—but they approach the task very differently:
Graph-based approach:
# You have to think like a computer scientist
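# ("Graph" here is a stand-in for a generic graph-style workflow API, not a specific library)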
workflow = Graph()
node1 = workflow.add_node("validate_input")
node2 = workflow.add_node("process_data")
node3 = workflow.add_node("send_result")
workflow.add_edge(node1, node2)
workflow.add_edge(node2, node3)
Readable/linear approach:
# You can read it like instructions
result = validate_input(data)
processed = process_data(result)
send_result(processed)
The first approach requires you to explicitly construct a graph data structure—creating nodes for each step, then creating edges that connect them—and the computer must then interpret this structure to determine execution order. The second approach simply executes line by line, top to bottom. The order is immediately obvious just from reading it. It's the difference between saying "Create waypoint A at Main Street, create waypoint B at Oak Avenue, create connection from A to B" versus simply saying "Drive down Main Street, then turn onto Oak Avenue."
This distinction impacts your governance model, because compliance officers and legal teams need to verify system behavior without requiring an engineer to translate. When logic is obscured in graph structures, you've created an expert-dependent system where only certain people can confirm the system operates as intended. That undermines the entire premise of cross-functional oversight—if your Trust Council can't read the code, they can't verify compliance.
A readable, fluent syntax—where control flow is immediately visible—makes systems more auditable by default. An auditor cannot verify a process they cannot comprehend. A system whose logic is not self-evident is a system that is inherently difficult to defend.
System legibility directly enables the forensic analysis and regulatory audit that non-deterministic systems require. In the event of an incident, a clear codebase drastically simplifies determining causality. Readable logic serves as demonstrable proof of compliance, showing regulators not just the outcome but the precise process that led to it. When you're composing agents and workflows into complex systems, that readability compounds—each additional layer of composition either clarifies or obscures the system's behavior. Getting this right at the primitive level makes everything built on top of it more defensible.
What This Means for Governance Teams
For legal and governance teams, a primary concern in AI systems is ensuring that autonomous operations are auditable, compliant, and operate with sufficient controls to prevent harm. The deliberate composition of workflows within agentic systems provides the technical foundation for this oversight.
Workflows create the immutable ledger of operations that establishes provenance. When workflows manage agent handoffs, tool selection, or multi-step data processing, they inherently document the execution path. This record demonstrates not just what an agent did, but why it took a certain action—establishing the chain of causality essential for non-repudiation.
Beyond audit trails, workflows function as a proactive control plane. They can validate LLM outputs before they're acted upon, enforce business rules, and ensure agents only use approved tools for sensitive tasks. This is the practical application of adding control where an agent's autonomy creates unacceptable risk.
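As a sketch of that kind of pre-action check, with an invented allow-list and business rule: the workflow, not the agent, gets the final say on whether a proposed action proceeds.
APPROVED_TOOLS = {"send_email", "create_ticket"}     # invented allow-list for sensitive steps
MAX_AUTO_DISCOUNT = 0.15                             # invented business rule

def validate_agent_proposal(proposal):
    # Workflow-side guardrail applied before any agent proposal is acted on.
    if proposal.get("tool") not in APPROVED_TOOLS:
        return {"allowed": False, "reason": f"tool '{proposal.get('tool')}' is not approved"}
    if proposal.get("discount", 0) > MAX_AUTO_DISCOUNT:
        return {"allowed": False, "reason": "discount exceeds auto-approval limit"}
    return {"allowed": True, "reason": "passed all checks"}

print(validate_agent_proposal({"tool": "send_email", "discount": 0.10}))
print(validate_agent_proposal({"tool": "wire_transfer", "discount": 0.0}))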
By wrapping autonomous components in structured, observable workflows, organizations implement the guardrails necessary to deploy powerful AI agents responsibly.
The Bottom Line
If 2025 was the year of the enterprise agent, 2026 will be the year of the workflow. The discourse around building AI systems is shifting away from a binary debate and toward a compositional model that strategically balances autonomous power with auditable control.
For organizations deploying these systems, the most immediate action is to evaluate current business processes and identify areas where unstructured autonomy introduces risk that could be mitigated by workflow-based control. But before you deploy that impressive new agent, ask a harder question: are your underlying workflows ready for it?
This is a young field where practice is evolving faster than theory. The composition patterns described here will undoubtedly mature and standardize as more teams put them into production. But the core insight is already clear: the organizations building the most defensible AI systems aren't choosing between agents and workflows. They're learning how to compose them. And they're getting the workflows right first.
Because it doesn't matter how fast your runners are if they can't pass the baton.
Concepts drawn from 'Agents vs Workflows: Why Not Both?', a presentation by Sam Bhagwat of Mastra.ai delivered at the AI Engineer conference.