Seven lawsuits ask whether chatbots owe adults a duty of care
Seven lawsuits against OpenAI allege that chatbot interactions caused psychological harm to adults, pushing courts to define duty-of-care standards beyond child-focused protections as states test universal notification requirements.
I came across a LinkedIn discussion of POLITICO coverage highlighting seven new lawsuits against OpenAI that focus on adult harms, not just children's safety. The cases involve adults who experienced delusions and suicidal ideation following extended chatbot interactions. Most AI safety legislation has centered on protecting minors, but these filings push courts to determine what duty of care developers owe all users.
Professor Zachary Henderson at UC Law San Francisco explains that legal "reasonableness" will depend on context: age, maturity, and whether users can distinguish human from machine. The comparison to tobacco and alcohol regulation is instructive—we allow adult access but require warnings and accountability when products pose foreseeable risks. The question is whether AI systems that shape cognition and behavior should carry similar standardized disclosures.
States are already serving as a testing ground. New York and California are experimenting with universal AI-notification and suicide-prevention requirements, signaling that child-focused policies may evolve into adult-inclusive safeguards. The right to sue after harm may become a core regulatory mechanism when traditional oversight can't keep pace with deployment speed.