Field Notes from PSR
Regulations are still uncertain, and legal and product professionals are trying to figure out what "responsible" actually looks like when the rules keep shifting.
Neon, a call-recording app that pays users to sell their audio to AI companies, went dark this week after TechCrunch discovered a security flaw that exposed users' phone numbers, call recordings, and transcripts to anyone else using the app.
OpenAI cut off FoloToy's API access after researchers at the Public Interest Research Group found the company's AI teddy bear teaching children…
The rise of autonomous AI agents is fundamentally expanding the attack surface for zero-click exploits, creating new and hard-to-predict risks.
Agentic AI demands a different approach to governance—proactive, structured, layered.
With AI regulation lagging, forward-thinking organizations can bridge the gap through robust internal governance frameworks, ensuring ethical AI development while gaining a competitive advantage.
Agentic AI projects fail due to unrealistic expectations about automation capabilities, poor use-case selection, data quality problems across multiple sources, and governance gaps that require custom solutions.
Companies seeing real returns from AI agents build measurement systems alongside the technology, treating deployment as an architectural decision rather than a bolt-on solution.