When AI privilege creates discovery blind spots for deployers
OpenAI wants ChatGPT conversations legally privileged, but traditional privilege requires professional accountability. For deployers, this means discovery blind spots: your team uses AI for strategy, you get sued, and you can't produce the conversations even when you need them.
Sam Altman wants ChatGPT conversations to have the same legal protections as attorney-client privilege. His pitch is about user privacy, but let's be real: OpenAI doesn't want to produce chat logs in discovery. A federal court already ordered the company to preserve conversations indefinitely—even deleted ones—and this is their countermove.
So what does this mean for enterprise use of frontier models? If customers think their strategic planning sessions, product roadmaps, or internal deliberations conducted via ChatGPT might end up in opposing counsel's hands, adoption stalls. That's why Altman's trying to create a legal shield that protects both users and OpenAI from discovery obligations.
Here's where it gets complicated for deployers. Say you give your product team ChatGPT access. They use it to workshop competitive strategy and question legal positions. Your company gets sued, and suddenly those conversations are relevant to discovery. Normally, you'd have to produce them. But if they're privileged?
Peter Swire, who teaches privacy law at Georgia Tech, pointed to the fundamental problem: traditional privilege exists in relationships with licensed professionals who have ethical obligations, malpractice exposure, and oversight. A doctor with privilege also has a state medical board watching them. A lawyer faces bar discipline. What's the accountability infrastructure underneath ChatGPT privilege?
The developer/deployer risk allocation shifts in weird ways. OpenAI's position—echoing arguments Google's been making—is that developers have "little control" over how models get used downstream, so they shouldn't bear responsibility for misuse. Fine. But if conversations are privileged, deployers can't produce evidence of what happened even when they need to.
Swire suggested privilege might work in "limited circumstances" where there's a "clear announcement" the chatbot is acting as a doctor or lawyer. That's defensible—we already have frameworks for technology-mediated professional relationships. But blanket privilege for every ChatGPT conversation? That's not about protecting sensitive disclosures. That's about limiting discovery obligations for OpenAI and anyone who deploys their tools.
The business question isn't whether users deserve privacy. It's whether we're building privilege into AI systems without the professional responsibility infrastructure that makes privilege workable. Because once you shield these conversations from legal process, you've changed who can investigate when things go wrong—and who answers for the gaps.

