Who owns the outcome when AI makes autonomous business decisions?
Mission owners coordinate humans and AI agents, but when your agent makes the wrong autonomous decision, who bears legal liability? Before redesigning workflows, build the accountability architecture underneath.
Linda Mantia, Surojit Chatterjee, and Vivian Lee write in Harvard Business Review that most companies treat agentic AI like any other technology upgrade—bolt it onto existing processes and hope for efficiency gains. The authors, drawing from experience at Manulife, Google, and Harvard Business School, argue that approach misses what makes agentic systems different: they make autonomous decisions to achieve objectives, which means you need someone who owns those objectives across the entire customer journey, not just within departmental silos.
They point to NTT DATA, which applied agentic AI to RFP responses and estimates a 70% reduction in time and cost, and to Hitachi Digital's HR operations, as examples of mission owners coordinating both humans and AI agents. The prescription is clear: design workflows around outcomes, appoint mission owners with real authority, and break free from organizational silos.
What the article doesn't address: liability architecture
But here's a harder question: when your AI agent makes the wrong autonomous decision—books the wrong venue, approves a contract outside policy parameters, or overcommits on an RFP that creates binding obligations—who bears the legal liability? Mission ownership is organizational design. Liability is legal architecture.
Before appointing mission owners and redesigning workflows, product and legal teams need to answer: How do your contracts allocate responsibility when agents make business decisions? What escalation triggers are embedded in the product architecture? Can you reconstruct exactly why the agent made any given decision for regulators or litigation? Do your vendor SLAs, customer terms, and insurance policies contemplate autonomous decision-making?
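To make those questions concrete, here is a minimal sketch of what embedding them in product code could look like, under assumptions of my own: the names (AgentDecision, EscalationPolicy, execute_with_accountability) are illustrative, not drawn from the article or from any particular platform. The idea is simply that every autonomous decision is recorded before it executes, and that policy thresholds route anything outside bounds to a named human owner rather than letting the agent act alone.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import uuid


@dataclass
class AgentDecision:
    """One autonomous decision, recorded so it can be reconstructed later."""
    action: str        # e.g. "approve_contract" (hypothetical action name)
    inputs: dict       # the data the agent saw when it decided
    rationale: str     # the agent's own explanation, stored verbatim
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


@dataclass
class EscalationPolicy:
    """Thresholds that force a human owner into the loop before execution."""
    max_contract_value: float = 50_000.0
    restricted_actions: frozenset = frozenset({"sign_binding_commitment"})

    def requires_human_approval(self, decision: AgentDecision) -> bool:
        if decision.action in self.restricted_actions:
            return True
        return decision.inputs.get("contract_value", 0.0) > self.max_contract_value


audit_log: list[dict] = []  # stand-in for durable, append-only storage


def execute_with_accountability(decision: AgentDecision, policy: EscalationPolicy) -> str:
    # Persist the full record first, so the "why" survives even if execution fails.
    audit_log.append(asdict(decision))
    if policy.requires_human_approval(decision):
        return "escalated"  # routed to a named human owner, not executed autonomously
    return "executed"


if __name__ == "__main__":
    decision = AgentDecision(
        action="approve_contract",
        inputs={"contract_value": 120_000.0, "counterparty": "ExampleCo"},
        rationale="Bid fits RFP scope; price within historical range.",
    )
    print(execute_with_accountability(decision, EscalationPolicy()))  # -> "escalated"
```

The specifics will differ by company, but the ordering is the point: the record exists before the action does, and the escalation test is part of the execution path, not a policy document sitting beside it.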
The companies reorganizing around agentic AI without building accountability mechanisms underneath aren't just moving fast—they're distributing risk without managing it. Trust isn't built by coordination alone. It requires architecture.

