AI writes code in hours, but teams still need days to review it

AI agents can build software in hours, but the constraint has shifted: it's no longer writing code, it's auditing what gets produced. Teams need new processes for reviewing AI-generated systems.

Mark Ruddock built a software platform during a six-hour flight. The application included 50 React components, enterprise integrations, and security configurations—work that typically takes an 18-person team weeks. He used what VentureBeat's Matt Marshall calls "agentic swarm coding."

OpenAI's latest models now solve 75% of GitHub issues, up from 58% months earlier. These aren't straightforward tasks; they require planning and contextual understanding to debug. The technology has moved past the experimental phase into something teams can actually ship.

The tradeoff is becoming clear for product and legal teams: the verification burden has grown substantially. Ruddock admits that some days his AI agents are "brilliant" and others they're "freaking shit heads," and he won't know which until after they've built everything. The constraint isn't writing code anymore; it's auditing what gets produced.

This requires new processes for reviewing AI-generated systems before they handle user data or connect to existing infrastructure. The speed is appealing, but the compliance overhead is substantial.
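One way such a review process could start is an automated gate that flags AI-generated changes touching sensitive areas for mandatory human audit before merge. The sketch below is purely illustrative; the file patterns, policy, and function names are assumptions, not anything described in the article.

```python
# Hypothetical pre-merge gate: flag AI-generated changes that touch
# sensitive paths (auth, secrets, infrastructure) for human review.
# The pattern list is an illustrative assumption, not a real policy.
import fnmatch

SENSITIVE_PATTERNS = [
    "*/auth/*",        # authentication and session code
    "*/payments/*",    # anything handling money
    "*secrets*",       # credential files
    "infra/*",         # infrastructure-as-code
]

def needs_human_review(changed_files):
    """Return the sorted subset of changed files that must be human-audited."""
    flagged = set()
    for path in changed_files:
        for pattern in SENSITIVE_PATTERNS:
            if fnmatch.fnmatch(path, pattern):
                flagged.add(path)
    return sorted(flagged)

changed = ["src/ui/Button.tsx", "src/auth/session.py", "infra/prod.tf"]
print(needs_human_review(changed))  # only the sensitive paths are flagged
```

A gate like this doesn't shrink the audit itself; it just routes reviewer hours to the parts of an AI-built system where a mistake is most expensive.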

AI agents can build software in hours, but the verification challenge creates new risks for product teams deploying these systems.

https://venturebeat.com/ai/vibe-coding-is-dead-agentic-swarm-coding-is-the-new-enterprise-moat