“As Private as a Lawyer or a Doctor” — But What Happens When the Law Disagrees?
Sam Altman recently said that AI conversations should be “as private as talking to a lawyer or a doctor.” That’s the right aspiration. But as TechRadar reports, the reality may be headed in a very different direction.
Regulatory pressures are mounting in the U.S. and abroad to preserve AI chat records for accountability and transparency. That means OpenAI—and others—could soon be forced to retain user conversations indefinitely. So, what happens when the desire for privacy collides with compliance obligations?
For legal, privacy, and product teams, this isn’t a future problem—it’s a now problem. AI platforms are increasingly being used in contexts that feel personal, sensitive, even intimate: mental health, legal advice, coaching, HR. But the infrastructure behind those tools may not offer the same privacy guarantees as the roles they mimic.
Unlike a conversation with your doctor or lawyer, ChatGPT isn’t covered by HIPAA or attorney-client privilege. And unless companies design for minimal data retention or adopt zero-knowledge architectures, these conversations can—and increasingly, must—be stored and reviewed.
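For teams asking what “minimal data retention” or a “zero-knowledge” approach could look like in practice, here is a minimal, hypothetical sketch. The key handling, function names, and 30-day window are illustrative assumptions, not any vendor’s actual implementation: the idea is simply that the provider stores only ciphertext it cannot read, plus a deletion date it commits to honoring.

```python
# Illustrative sketch only: client-side encryption plus a retention deadline.
# Assumes the user's device (not the provider) holds the key.
from datetime import datetime, timedelta, timezone

from cryptography.fernet import Fernet  # pip install cryptography

# Key is generated and kept on the user's device; the server never sees it.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

def encrypt_for_storage(message: str) -> bytes:
    """Encrypt a chat message client-side before it is sent to the server."""
    return cipher.encrypt(message.encode("utf-8"))

def retention_deadline(created_at: datetime, days: int = 30) -> datetime:
    """Compute a deletion date under a minimal-retention policy (e.g. 30 days)."""
    return created_at + timedelta(days=days)

# The server stores only ciphertext and a scheduled deletion date.
record = {
    "ciphertext": encrypt_for_storage("I think I need to talk to someone..."),
    "delete_after": retention_deadline(datetime.now(timezone.utc)),
}
```

The trade-off is that a provider that cannot read conversations also cannot easily produce them for regulators, which is exactly the collision this piece describes.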
This gap between perceived confidentiality and actual privacy is a ticking trust bomb. Users assume AI chats are private. Regulators assume they might be evidence. Companies need to navigate both.
Altman’s statement is bold. Let’s hope the architecture—and the policies—catch up to the vision.
Comment, connect and follow for more commentary on product counseling and emerging technologies. 👇