I track AI agent development for work, so when I clicked on Cate Hall's TED Talk thinking it might cover autonomous systems, I was caught off guard. Her talk was about human agency—our capacity to act intentionally rather than drift through decisions.
The timing feels apt. As AI agents handle more of our routine tasks, from booking meetings to drafting contracts, the skills Hall identifies become more critical, not less. Her three principles translate directly to how product teams should think about AI integration.
First: assume everything is learnable. Don't default to "the AI handles this better" without understanding the underlying process. When counsel review AI-generated contracts, they need to grasp both the output and the reasoning that produced it. Second: court rejection by testing AI on applications that seem unreasonable. The boundaries aren't clear yet, so experimentation reveals capabilities and limits faster than a conservative approach does. Third: seek real feedback by examining AI outputs critically rather than accepting them at face value. In an agentic world, that means dissecting how the AI reached its conclusions, questioning whether you actually agree with the reasoning, and deciding whether you should agree based on your own analysis rather than the AI's confidence level.
Hall ends with "unlock the doors inside"—a reminder that agency is about attention and intention, not just capability. For product counsel, that means staying present in the design choices that shape how AI integrates with human workflows. The future belongs to AI agents, but the future worth having will be shaped by humans who know when and how to take the wheel back.
The more autonomous our tools become, the more agency we need to bring to using them. The question, then, isn't whether AI agents will reshape how we work, but whether we'll stay deliberate enough to shape them back.