Chaining Tasks: AI task chains and the future of work

A new MIT/Microsoft/Yale paper finds AI's value doesn't show up at the task. It shows up at the chain


A new MIT/Microsoft/Yale paper from Demirer, Horton, Immorlica, Lucier, and Shahidi argues something product counsel should sit with: AI's value doesn't show up at the task. It shows up at the chain.

The paper models work as sequences of steps. AI gets assigned in contiguous runs called chains. Three findings stand out:

  • AI-executed steps cluster together. Adjacency to an AI step makes the next step more likely to be AI-executed.
  • Dispersion kills automation. A single human-only step in the middle of a sequence splits the chain in two and collapses its value.
  • Comparative advantage logic breaks. Firms hand entire chains to AI even when humans do individual steps better, because every handoff carries review, validation, and adjustment cost.
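That third dynamic can be made concrete with a toy brute-force sketch. The numbers and cost structure below are my own illustration, not the paper's model: each step has an AI quality and a human quality, the chain starts and ends with a human, and every switch of executor costs a fixed handoff fee. Even with a human who beats the AI at the middle step, the optimal assignment hands the whole chain to AI once the handoff cost exceeds the per-step quality gap.

```python
from itertools import product

# Illustrative numbers (assumptions, not from the paper): per-step output
# quality for AI vs. a human, and a fixed cost per executor handoff.
ai_quality    = [1.0, 1.0, 1.0, 1.0, 1.0]
human_quality = [0.6, 0.6, 1.3, 0.6, 0.6]  # human beats AI at step 2
handoff_cost  = 0.5

def chain_value(assignment):
    """Net value of an assignment: tuple of 'A' (AI) or 'H' (human) per step.

    The work enters from and returns to a human, so we pad the sequence
    with 'H' on both ends and charge handoff_cost for each executor switch.
    """
    quality = sum(ai_quality[i] if who == "A" else human_quality[i]
                  for i, who in enumerate(assignment))
    padded = ("H",) + tuple(assignment) + ("H",)
    switches = sum(a != b for a, b in zip(padded, padded[1:]))
    return quality - handoff_cost * switches

# Brute-force all 2^5 assignments and pick the best.
best = max(product("AH", repeat=5), key=chain_value)
print(best, round(chain_value(best), 2))
# → ('A', 'A', 'A', 'A', 'A') 4.0
```

Inserting the human at step 2 raises raw quality (5.3 vs. 5.0) but adds two extra handoffs, dropping net value to 3.3; the all-AI chain wins despite being worse at that step.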

That last finding is the one I keep circling. Our entire accountability stack was built around handoffs. Approvals, sign-offs, four-eyes review, audit trails — these exist because we assumed work moved between humans, with checkpoints in between. The paper says firms maximize value by removing those checkpoints.

Which means the question for product counsel is no longer "did the AI do this task correctly?" It's "who owns the chain?" When five steps run autonomously between the human input and the human output, attribution doesn't sit at any single step. It sits at the design of the chain itself.

That changes what governance has to do. Reviewing model outputs isn't enough. We need to review chain boundaries — where work enters AI control, where it exits, and what happens in between with no human looking. The unit of accountability has to match the unit of work.

https://peymanshahidi.github.io/assets/pdf/chaining_tasks_ai_automation.pdf