AI agents will trap you with your own memories
AI agents promise to be indispensable by remembering everything about you. But if those memories can't transfer between platforms, you're locked in. Control over memory is control over identity, and companies are writing the rules.
Picture an executive assistant who knows you prefer aisle seats, remembers your Social Security number, and texts you podcast recommendations at the right moment. Now imagine that assistant is an AI agent, one you can't take with you when you switch platforms. That's the memory problem Kevin Frazier and Joshua Joseph lay out in TechPolicy Press. AI agents promise to be indispensable by remembering everything: your emails, credit card statements, calendar patterns, and preferences you never explicitly shared. The convenience is real, but so is the trap.
The central question is governance. Should agents remember everything unless you tell them to forget? Or should they ask permission for each memory, creating friction that defeats the whole point? Let an agent learn too much and you risk inaccurate inferences (like booking you a pub table because you went there after three long meeting days, even though you're trying to cut back on drinking). Restrict memory too tightly and you might forget to tell it about your peanut allergy.
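To make the trade-off concrete, here is a minimal sketch of the two extremes, default-remember versus per-memory consent, using a hypothetical `AgentMemoryStore` (all names below are made up for illustration; this mirrors the article's framing, not any real platform's API):

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """One fact the agent wants to retain (hypothetical schema)."""
    key: str
    value: str
    inferred: bool = False  # True if derived by the agent, not explicitly shared

@dataclass
class AgentMemoryStore:
    """Toy store contrasting the two governance extremes."""
    policy: str  # "opt_out" = remember everything; "opt_in" = ask every time
    memories: dict = field(default_factory=dict)

    def remember(self, memory: Memory, user_consents=None) -> bool:
        if self.policy == "opt_out":
            # Default-remember: retained silently; the burden falls on the
            # user to notice later and ask the agent to forget.
            self.memories[memory.key] = memory
            return True
        # Default-forget: every new memory interrupts the user with a
        # consent prompt, the friction that defeats the point of an agent.
        if user_consents is not None and user_consents(memory):
            self.memories[memory.key] = memory
            return True
        return False

    def forget(self, key: str) -> None:
        self.memories.pop(key, None)

# Under opt-out, a shaky inference slips in without a prompt...
store = AgentMemoryStore(policy="opt_out")
store.remember(Memory("after_work_venue", "pub", inferred=True))
# ...and the agent may later book a pub table you never asked for.
```

Neither extreme is livable, which is why the authors frame this as a governance question rather than a settings toggle.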
The bigger worry is competition dynamics. If your memories can't transfer between platforms, you're locked in, just as phone numbers once tied customers to carriers before number-portability rules. Companies know this. Microsoft bundled Teams with Office 365; Apple restricted third-party payments on iOS. Bundling agents with email, calendars, and smart home devices is the obvious next move. And even if regulators mandate portability, will exported memories actually work on rival platforms? Or will companies make the transfer painful enough that you won't bother?
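On portability, a sketch of why a mandate alone may not help. Exporting to some neutral format is the easy part; everything below, including the `agent-memory-export/v0` schema name, is invented for illustration, since no such standard exists, and that gap is exactly the point:

```python
import json
from datetime import datetime, timezone

def export_memories(memories: list, owner: str) -> str:
    """Serialize agent memories into a hypothetical vendor-neutral JSON
    envelope. The open question is whether a rival platform would import
    the inferences inside it in any usable way."""
    envelope = {
        "schema": "agent-memory-export/v0",  # made-up identifier, not a real standard
        "owner": owner,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "memories": memories,
    }
    return json.dumps(envelope, indent=2)

print(export_memories(
    [{"key": "seat_preference", "value": "aisle", "inferred": False}],
    owner="user@example.com",
))
```

Phone-number portability worked because regulators fixed both the right to leave and the technical handoff; memory portability would need the same pairing.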
The authors are right: control over memory is control over identity. Act now or watch agents script our choices for us.

