The professional privilege gap in conversational AI
Altman's admission about ChatGPT's confidentiality problem exposes a fundamental design flaw: AI systems invite professional-level trust while offering none of the professional-level legal protections.
I keep returning to a central tension in AI product development: the gap between what users experience and what's actually protected legally. Sam Altman's comments on Theo Von's podcast highlight this problem in a way that requires immediate attention from product teams.
Altman acknowledged that users routinely treat ChatGPT "as a therapist, a life coach," turn to it for "relationship problems," and share "the most personal" details of their lives. These interactions feel professionally confidential, yet they carry no legal privilege. When litigation arises, those intimate conversations become discoverable evidence.
Traditional privacy frameworks assume users understand they're interacting with technology companies, not licensed professionals. When AI systems deliberately cultivate therapeutic rapport and users explicitly seek professional-style guidance, the line blurs in ways that current legal structures can't address.
OpenAI's ongoing court battle with The New York Times, in which the newspaper is seeking access to hundreds of millions of user conversations, shows how quickly these theoretical concerns become concrete litigation realities.
Is Altman right that "we should have the same concept of privacy for your conversations with AI that we do with a therapist"? That is a question for legislators and courts to settle. In the meantime, product teams need to close this gap through design choices that make the actual legal status of AI conversations explicit, especially when those conversations venture into territory typically governed by professional licensing and privilege protections.
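As one hedged sketch of what such a design choice might look like, purely for illustration: a lightweight check that surfaces a plain-language disclosure whenever a conversation drifts into therapist-, lawyer-, or doctor-adjacent territory. The topic keywords, disclosure wording, and function names below are assumptions made for the sketch, not any vendor's actual implementation.

```python
from typing import Optional

# Illustrative keyword lists; a real product would use a proper classifier.
SENSITIVE_TOPICS = {
    "therapy": ["therapist", "depressed", "anxiety", "relationship problems"],
    "legal": ["lawsuit", "divorce", "custody", "criminal charges"],
    "medical": ["diagnosis", "symptoms", "medication", "test results"],
}

# Hypothetical disclosure copy clarifying the absence of professional privilege.
DISCLOSURE = (
    "Reminder: this conversation is not protected by therapist-client, "
    "attorney-client, or doctor-patient privilege. It may be retained and "
    "could be subject to legal discovery."
)

def detect_sensitive_topic(message: str) -> Optional[str]:
    """Return the first sensitive category the message appears to touch, if any."""
    lowered = message.lower()
    for category, keywords in SENSITIVE_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return None

def disclosure_for(message: str) -> Optional[str]:
    """Return a disclosure to display alongside the model's reply, or None."""
    return DISCLOSURE if detect_sensitive_topic(message) else None

if __name__ == "__main__":
    print(disclosure_for("My therapist retired and now I'm facing a custody dispute."))
```

The point is not the heuristic, which is deliberately crude here; it is that the product tells users, at the moment they are sharing privilege-adjacent material, exactly what legal protection their conversation does and does not have.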
