AI agents are hitting reality checks faster than the hype suggested

Agentic AI fails due to unrealistic expectations about automation capabilities, poor use case selection, data quality problems across multiple sources, and governance gaps that force teams to build custom oversight tooling.


Derek Ashmore analyzed agentic AI deployments across multiple companies for The New Stack and identified why they keep failing. The problems fall into four categories.

First, teams set unrealistic expectations. They expect agents to handle emotionally nuanced tasks or to deliver immediate results when the technology needs iterative development. Most tasks beyond routine automation still require human oversight.
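
A common way to keep that oversight in the loop is an approval gate: the agent executes only a small allowlist of routine actions on its own and routes everything else to a human reviewer. Here is a minimal sketch of that pattern; the action names, `ROUTINE_ACTIONS` allowlist, and review-queue hook are illustrative, not from any particular framework:

```python
# Minimal human-in-the-loop gate: auto-execute only allowlisted routine
# actions; queue everything else for human review. All names are illustrative.
from dataclasses import dataclass

ROUTINE_ACTIONS = {"fetch_report", "summarize_ticket"}  # safe to auto-approve

@dataclass
class ProposedAction:
    name: str
    payload: dict

def execute(action: ProposedAction) -> str:
    # Stand-in for the real side effect.
    return f"executed {action.name}"

def review_queue_put(action: ProposedAction) -> str:
    # Stand-in for routing to a human reviewer (ticket, chat message, etc.).
    return f"queued {action.name} for human review"

def dispatch(action: ProposedAction) -> str:
    if action.name in ROUTINE_ACTIONS:
        return execute(action)
    return review_queue_put(action)

print(dispatch(ProposedAction("fetch_report", {})))    # executed fetch_report
print(dispatch(ProposedAction("delete_records", {})))  # queued for human review
```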

Second, organizations pick the wrong starting points. Instead of beginning with well-defined tasks that have measurable outcomes, they try to automate every workflow at once.

Third, data quality creates bigger problems than the usual "garbage in, garbage out" issues. Agents need access to both structured databases and unstructured documents, and when information conflicts across sources they make wrong decisions, because they can't interpret context the way humans do.
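
One cheap guard is to reconcile values across sources before the agent acts, and to refuse to proceed on disagreement instead of silently picking one. A sketch under assumed source and field names (`crm`, `billing_db`, `subscription_status` are all hypothetical):

```python
# Hedged sketch: compare each source's value for a field; only proceed when
# they agree, otherwise surface the conflict for escalation.
def reconcile(field: str, readings: dict):
    """readings maps source name -> value pulled from that source."""
    distinct = set(readings.values())
    if len(distinct) == 1:
        return distinct.pop()  # sources agree; safe for the agent to use
    # Sources disagree: raise rather than letting the agent guess.
    raise ValueError(f"conflict on {field!r}: {readings}")

readings = {"crm": "active", "billing_db": "cancelled", "warehouse": "active"}
try:
    status = reconcile("subscription_status", readings)
except ValueError as err:
    print(err)  # conflict on 'subscription_status': ... -> escalate to a human
```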

Fourth, oversight is inadequate. Most tools offer limited logging and auditing, forcing teams to build custom tracking solutions. This matters because agents take independent actions that directly affect the systems they run in: when oversight fails, you get system failures, not just bad outputs.
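
The custom tracking teams end up building often amounts to something like the following: a thin wrapper that emits one structured log line per agent action, with inputs and outcome. This is a standard-library-only sketch, and the wrapped action and its parameters are made up for illustration:

```python
# Minimal audit trail: a decorator that records every agent action, its
# inputs, and its outcome as one JSON line per call.
import functools
import json
import logging
import time

audit = logging.getLogger("agent.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def audited(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"action": fn.__name__, "args": repr(args),
                  "kwargs": repr(kwargs), "ts": time.time()}
        try:
            result = fn(*args, **kwargs)
            record["status"] = "ok"
            return result
        except Exception as exc:
            record["status"] = f"error: {exc}"
            raise
        finally:
            audit.info(json.dumps(record))  # one JSON line per action
    return wrapper

@audited
def update_inventory(sku: str, delta: int) -> int:
    return delta  # stand-in for the real side effect

update_inventory("sku-123", -2)
```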
