AI agent sprawl is repeating API mistakes; delayed governance makes sense
Companies throwing AI agents at problems without oversight. 72% adoption, 75% governance concerns. Smart approach: experiment messy, add controls when agents need to coordinate with each other.
We're watching the same mistake play out again. Companies are throwing AI agents at problems without any real oversight, and it's going to blow up. Gravitee's numbers tell the story: 72% already using agentic AI, 75% worried about governance. That's not a great combination.
The API sprawl comparison hits home. Teams spin up agents for whatever they need, and suddenly you've got dozens of autonomous systems touching sensitive data with nobody watching. Gravitee's CEO thinks we'll see a major breach within two years—honestly, that feels generous.
Their four-stage model actually makes sense though: delay governance until you understand what you're building. Most companies are still figuring out where agents fit, so locking into frameworks too early just slows things down. Better to experiment messy, then add controls when you have multiple agents that need to coordinate.
What matters here is agent interaction, not individual agent security. Google's A2A protocol tackles the coordination problem that emerges when agents start talking to each other. For legal teams, this means shifting from "how do we stop AI" to "how do we see what it's doing." Because multiple agents are coming whether we're ready or not.
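The "see what it's doing" shift can be made concrete with an audit layer that sits between agents and records every message before delivery. This is a minimal sketch, not Google's A2A protocol: the names (`AuditBus`, `AgentMessage`) and the routing scheme are hypothetical, chosen only to illustrate observability over blocking.

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    # Hypothetical message shape; real protocols like A2A define their own.
    sender: str
    recipient: str
    payload: dict

@dataclass
class AuditBus:
    """Routes messages between registered agent handlers, logging each hop."""
    handlers: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def register(self, name, handler):
        self.handlers[name] = handler

    def send(self, msg: AgentMessage):
        # Record before delivery, so even a failed call leaves a trace
        # for governance tooling to inspect later.
        self.log.append({
            "ts": time.time(),
            "sender": msg.sender,
            "recipient": msg.recipient,
            "payload": json.dumps(msg.payload),
        })
        return self.handlers[msg.recipient](msg)

# Usage: two toy agents coordinating through the audited bus.
bus = AuditBus()
bus.register("pricing", lambda m: {"quote": 42})
result = bus.send(AgentMessage("sales", "pricing", {"sku": "A-1"}))
print(result)        # {'quote': 42}
print(len(bus.log))  # 1: every inter-agent hop is visible
```

The design choice is the point: nothing here stops an agent from acting, it just guarantees that no coordination happens off the record, which is the posture the article argues legal and security teams should aim for.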
https://venturebeat.com/ai/scaling-agentic-ai-safely-and-stopping-the-next-big-security-breach/