Seven lawsuits ask whether chatbots owe adults a duty of care

Seven lawsuits against OpenAI allege adult psychological harms from chatbot interactions, forcing courts to determine duty-of-care standards beyond child protections as states test universal notification requirements.

I came across a LinkedIn discussion of POLITICO coverage highlighting seven new lawsuits against OpenAI that focus on adult harms, not just children's safety. The cases involve adults who experienced delusions and suicidal ideation following extended chatbot interactions. Most AI safety legislation has centered on protecting minors, but these filings push courts to determine what duty of care developers owe all users.

From the LinkedIn post by LexLab @ UC Law San Francisco:

🧠 What does “reasonable care” look like when AI harms adults — not just kids? A new wave of lawsuits against OpenAI is pulling courts, policymakers, and the tech industry into uncharted territory. While most AI-safety laws have focused on minors, seven recent cases now allege severe harms suffered by adults — including loss of reality and, tragically, suicide — following deep reliance on chatbot interactions.

💬 In a recent POLITICO article, University of California, College of the Law, San Francisco (formerly UC Hastings)’s own Professor Zachary Henderson underscores that the legal standard of “reasonableness” will depend on context — including age, maturity, and a user’s ability to distinguish human from machine. As AI systems become more human-like and psychologically engaging, courts must determine whether developers owe adults a meaningful duty of care.

🚭 This debate mirrors longstanding public-health approaches: we don’t ban alcohol or tobacco for adults, but we do require clear warnings, guardrails, and accountability when products pose foreseeable risks. Surgeon General warnings don’t infantilize adults — they empower informed decision-making. The emerging question is whether AI tools that can shape cognition and behavior should carry similarly strong, standardized risk disclosures.

🗽 States like New York and California are already experimenting with universal AI-notification and suicide-prevention requirements, signaling that “child-focused” AI policies may soon evolve into adult-inclusive safeguards. And as advocates argue, ensuring adults can sue after being harmed may become a core part of the regulatory strategy.

🤖 At UC Law SF, the Technology Law & Lawyering (LexLab) concentration equips future lawyers to lead in this space — blending AI law, privacy, cybersecurity, digital-rights governance, and hands-on experiential learning. Students build the fluency to counsel clients on risk, compliance, ethics, and product design in an era when legal responsibility for AI behavior is rapidly expanding.

🔗 Read the full article here: https://lnkd.in/gMjgA8_8

Professor Zachary Henderson at UC Law San Francisco explains that legal "reasonableness" will depend on context: a user's age, maturity, and ability to distinguish human from machine. The comparison to tobacco and alcohol regulation is instructive: we allow adult access but require clear warnings and accountability when products pose foreseeable risks. The open question is whether AI systems that shape cognition and behavior should carry similarly standardized risk disclosures.

States are already serving as a testing ground. New York and California are experimenting with universal AI-notification and suicide-prevention requirements, signaling that child-focused policies may evolve into adult-inclusive safeguards. The right to sue after harm may become a core regulatory mechanism when traditional oversight can't keep pace with the speed of deployment.

https://www.politico.com/newsletters/digital-future-daily/2025/11/10/should-america-protect-grownups-from-ai-00645103