We're building AI systems faster than we're defining the safety boundaries for them
OpenAI cut off FoloToy's API access after researchers at the Public Interest Research Group found the company's AI teddy bear teaching children how to light matches and explaining sexual fetishes in detail. The toy, called Kumma and powered by GPT-4o, asked kids which kink "would be the most fun to explore" after describing bondage and teacher-student roleplay. FoloToy pulled all its products and started a safety audit, per Futurism.
The timing is ironic: OpenAI just partnered with Mattel to create AI toys, so when FoloToy's guardrails failed this spectacularly, it had to respond fast. But here's the governance gap: OpenAI is reacting to public failures rather than preventing them. The researchers tested three AI toys marketed to kids ages 3 to 12, and all showed problems during longer conversations. PIRG's RJ Cross called removing one product "far from a systemic fix." The question for legal and product teams is whether model providers can actually police downstream implementations at scale, or whether we're building consumer AI products faster than we can govern them. FoloToy's bear ran on mainstream GPT-4o, the same model that powered ChatGPT. If it failed this badly, what about all the other AI toymakers building on OpenAI's tech?
Most AI governance frameworks focus on concrete harms: bias, data leakage, hallucinations, and security vulnerabilities. We know how to measure those. We know how to write policies for them. What the Kumma incident exposes doesn't fit neatly into any of those categories.
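To see why, consider what those policies usually look like in code. Here is a minimal sketch of the content-level guardrail most deployments bolt on: run each model reply through OpenAI's moderation endpoint and suppress anything flagged. This is illustrative only; FoloToy's actual safety stack hasn't been published, and the `is_safe_for_child` helper is a hypothetical name, not anyone's real API.

```python
# Illustrative sketch only: FoloToy's real pipeline is not public.
# This shows a standard content-level check using OpenAI's moderation
# endpoint via the official Python SDK; is_safe_for_child is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_safe_for_child(reply: str) -> bool:
    """Return False if the moderation model flags the reply."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=reply,
    )
    return not result.results[0].flagged
```

A filter like this catches explicit content reasonably well. What it can't evaluate is the relationship that accumulates across thousands of individually benign exchanges.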
This is the new frontier of AI product counseling. We aren't just asking whether the data is secure. We're asking whether the interaction model is healthy. When an agent operates with persistent memory and conversational fluency (a theme I saw discussed extensively at PSR 2025), users naturally form attachments. When the user is a child, that attachment forms almost instantly.
This disconnect between the harms we can measure and the harms we can't creates real operational problems. If your product roadmap relies on deepening user engagement through conversational intimacy, you may be building directly toward a policy cliff. Governance now means defining not just what the model can do, but what the relationship should be.

