MIT predicted AI failure, but agents are already running in production
G2 data shows nearly 60% of companies have AI agents in production, with failure rates under 2% once deployed, contradicting MIT's prediction that 95% of AI projects would fail. For legal teams, this means governance frameworks can't wait for academic consensus when systems are already deployed.
G2's numbers paint a different picture from recent academic forecasting: nearly 60% of companies have AI agents in production, with fewer than 2% failing once deployed. Compare that to MIT's prediction that 95% of AI projects would fail, and you see a disconnect between research models and what's happening in production environments.
This matters for legal teams because it changes the risk profile. We're not talking about experimental technology with high failure rates. We're talking about systems that companies have deployed widely, which means the failure mode isn't "the system doesn't work." It's "the system works, but we haven't figured out the governance model yet."
The more interesting question is what accounts for the gap. Are companies deploying narrower, more constrained agents that avoid the failure patterns MIT predicted? Are they iterating faster than research cycles can track? Or are we simply measuring different things, with research tracking ambitious, AGI-adjacent projects while companies deploy focused workflow automation?
For product counsel, the takeaway: governance can't wait for academic consensus. If the deployment train has left the station, legal frameworks need to be on board, not catching up.