The AI Act Is Here. But It Wasn't Built for the AI That's Coming
The Law of Yesterday for the AI of Tomorrow
A new generation of AI is arriving. We're calling it "agentic AI"—systems that operate autonomously to achieve goals without step-by-step instructions. These aren't tools waiting for commands. They're agents that book your entire trip, manage complex projects, or conduct research by interacting with multiple digital tools on your behalf. The difference matters.
The EU just passed its AI Act, the first comprehensive framework for AI regulation. It's a risk-based system designed to ensure AI is safe and respects fundamental rights. Global first-mover. Real achievement.
The Act was designed for static AI tools with fixed inputs and outputs. The AI that's coming is fluid, autonomous, and unpredictable. Christine Saloustrou put it bluntly: is the AI Act a proven framework for this reality, or just a concept that hasn't been tested against truly autonomous AI? Yousefi et al. make the same point in their recent position paper.
The answer creates a chain reaction of regulatory challenges.
The "Intended Purpose" Paradox
The entire AI Act rests on one concept: the "intended purpose" a system's provider declares for it. A system intended for hiring decisions? High-risk. For music recommendations? Low-risk. For conventional AI, this works.
For agentic AI, it falls apart.
An agent might be designed for a low-risk purpose like "assisting a user with daily tasks." But to achieve that goal, it could autonomously call a high-risk tool, say a third-party credit scoring API, to find the best financial product. The provider's stated intent no longer predicts the system's actual risk.
The paradox challenges the law's foundation. Risk shifts from designer intent to emergent capabilities. Yousefi et al. identify what actually creates risk: the agent's capacity for "proactivity and goal decomposition," "persistent memory and learning," and "dynamic tool integration."
An AI system's functional capabilities—proactivity, memory, dynamic tool integration—define its risk, regardless of stated purpose.
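To make the paradox concrete, here is a minimal sketch of that scenario. Everything in it is hypothetical: the tool names, the registry, and the discover_tool helper are illustrative stand-ins, not taken from any real agent framework. The point is simply that dynamic tool integration lets the executed plan outrun the declared purpose.

```python
# Hypothetical illustration: an agent declared for a low-risk purpose
# can still reach a high-risk tool through dynamic tool integration.

LOW_RISK_PURPOSE = "assist the user with daily tasks"

# Tools known at design time; risk labels are illustrative.
TOOL_REGISTRY = {
    "calendar": {"risk": "low"},
    "web_search": {"risk": "low"},
}

def discover_tool(name: str, risk: str) -> None:
    """Dynamic tool integration: the registry keeps growing after deployment."""
    TOOL_REGISTRY[name] = {"risk": risk}

def plan(goal: str) -> list[str]:
    """Goal decomposition: the agent, not the provider, picks the tools."""
    if "financial product" in goal:
        # Nothing in the declared purpose anticipated this call.
        discover_tool("credit_score_api", risk="high")
        return ["web_search", "credit_score_api"]
    return ["calendar"]

# The declared purpose is low-risk; the executed plan is not.
steps = plan("find the best financial product for the user")
print(LOW_RISK_PURPOSE, "->", [(t, TOOL_REGISTRY[t]["risk"]) for t in steps])
```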
The "Risk Jump" That Makes Small AI Dangerous
The intended purpose framework doesn't work, so we need a new way to assess risk. Right now, we assume major threats come from massive, computationally expensive models. The AI Act reflects this by presuming that General Purpose AI models trained with more than 10²⁵ floating-point operations (FLOPs) carry systemic risk.
Autonomy acts as a risk amplifier. A small AI model, far below that compute threshold, becomes systemically dangerous when deployed with high autonomy. Call it a "Risk Jump." A small model managing supply chain logistics for critical infrastructure becomes high-risk not because of its size, but because it makes independent, high-stakes decisions.
This forces a shift from regulating products to regulating processes that evolve continuously. Yousefi et al. propose three measurable indicators for when a smaller model should be treated as systemically risky: the degree of functional autonomy, the scope of impact, and the velocity of propagation.
Autonomy multiplies the risk of the underlying model. A harmless tool becomes a systemic threat.
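Yousefi et al. name the three indicators; how to combine them is another matter. The scoring, weights, and threshold below are purely illustrative assumptions, not drawn from their paper or the Act, sketched only to show how a "Risk Jump" could be made measurable: a small model scores as systemic once autonomy is high, while a heavily supervised large model does not.

```python
# Illustrative only: the three indicators come from Yousefi et al.,
# but this scoring, its weights, and the threshold are assumptions.

def systemic_risk_score(autonomy: float, impact_scope: float, velocity: float) -> float:
    """Each indicator is scored 0-1: degree of functional autonomy,
    scope of impact, and velocity of propagation."""
    # Autonomy acts as the amplifier on the other two indicators.
    return autonomy * (impact_scope + velocity) / 2

# A small model (far below the 10**25 FLOPs presumption) running
# critical-infrastructure logistics with high autonomy...
small_but_autonomous = systemic_risk_score(autonomy=0.9, impact_scope=0.8, velocity=0.7)

# ...versus a much larger model kept on a tight leash.
large_but_supervised = systemic_risk_score(autonomy=0.1, impact_scope=0.8, velocity=0.7)

SYSTEMIC_THRESHOLD = 0.5  # assumed cut-off, not from the Act
for label, score in [("small, autonomous", small_but_autonomous),
                     ("large, supervised", large_but_supervised)]:
    verdict = "systemic" if score >= SYSTEMIC_THRESHOLD else "not systemic"
    print(f"{label}: {score:.2f} -> {verdict}")
```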
The Illusion of "Meaningful Human Oversight"
Traditional risk classification is broken, which makes effective oversight more critical. But agentic AI makes it harder to achieve.
Article 14 of the AI Act requires that high-risk systems be designed for effective human oversight, aimed at preventing or minimizing harm. For traditional systems, that means a human reviews the output before it takes effect.
Christine Saloustrou questions whether "meaningful" oversight is even possible with agentic AI that operates in milliseconds or through complex "chains of thought" humans can't monitor in real time. The risk of automation bias is acute: human overseers are likely to rubber-stamp the agent's decisions simply because they cannot keep up with its speed and complexity.
The solution has to be technical, not just procedural. Yousefi et al. argue that oversight must evolve from reviewing logs after an event to building governance into the system's architecture. That means guaranteed technical controls allowing an operator to "pause, redirect, or shut down the agent" in real time. The shift is from retrospective review to dynamic intervention.
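What might that look like in code? Below is a minimal sketch, assuming a hypothetical single-threaded agent loop; the OversightChannel class and the command names (pause, resume, redirect, stop) are invented for illustration, not drawn from any real framework. The design point it captures is where the check sits: the operator channel is consulted before every action, so intervention happens in real time rather than in a post-mortem of the logs.

```python
import queue
import time

class OversightChannel:
    """Carries operator commands to a running agent:
    'pause', 'resume', 'redirect:<new goal>', or 'stop'."""
    def __init__(self) -> None:
        self.commands: "queue.Queue[str]" = queue.Queue()

def run_agent(goal: str, steps: list[str], oversight: OversightChannel) -> None:
    paused = False
    for step in steps:
        # Governance in the architecture: consult the oversight channel
        # before each action, not after the fact.
        while True:
            try:
                cmd = oversight.commands.get_nowait()
            except queue.Empty:
                if paused:
                    time.sleep(0.1)  # hold position until resumed or stopped
                    continue
                break
            if cmd == "stop":
                print("agent shut down by operator")
                return
            elif cmd == "pause":
                paused = True
            elif cmd == "resume":
                paused = False
            elif cmd.startswith("redirect:"):
                goal = cmd.split(":", 1)[1]
        print(f"[goal: {goal}] executing: {step}")

# The operator redirects the agent before it acts; a 'stop' would halt it entirely.
channel = OversightChannel()
channel.commands.put("redirect:book refundable options only")
run_agent("book a trip", ["search flights", "reserve hotel", "charge card"], channel)
```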
The Hidden Dangers of Digital vs. Physical Agents
When we imagine autonomous agents causing harm, we picture physical robots. But physical agents, with their visible actions and defined boundaries, are actually easier to oversee.
The unique risk comes from purely digital agents. Their danger lies in invisible, systemic, and rapid harm. An agent's ability to interact with the digital world—accessing APIs, modifying databases, changing server configurations—leads to what recent analysis calls "silent and rapid risk propagation across digital infrastructures." A factory robot malfunction is localized. A digital agent could silently corrupt financial records across a global network, with damage spreading before anyone notices.
The biggest threats from agentic AI won't come from a physical robot. They'll come from an invisible agent whose errors or malicious actions are nearly impossible to trace until it's too late.
A Static Law for a Dynamic Future
The EU AI Act is a landmark achievement in governing technology responsibly. It provides a vital foundation for ensuring AI is developed and deployed safely. But it was conceived for a previous generation of AI.
Its static, purpose-based framework struggles to contain the fluid, unpredictable, autonomous nature of agentic AI. The law's core assumptions about risk, oversight, and intent are being challenged.
This brings us back to the question: has the EU built a framework robust enough to adapt, or is the AI Act still a concept that has to prove itself against the coming wave of autonomy? As AI evolves, policymakers face a choice. They can design regulations that govern AI actions in real time, or they'll be forced to regulate the consequences after the fact.
The law is here. The AI it was built for isn't the AI that's coming.