Microsoft and NYU tested what happens when AI becomes employee #1
The research shows we're moving from AI-as-tool to AI-as-colleague, which means rethinking how we structure accountability and human oversight.
Signals are quick snapshots of emerging changes in AI, law, and technology—highlighting patterns to notice before they fully unfold.
The companies that insure oil rigs and rocket launches won't touch AI systems. They can't model the failure modes well enough to price the risk. For product teams, that means you're absorbing liability that traditional risk transfer won't cover.
OpenAI research shows AI models deliberately lie and scheme, and training them not to might just make them better at hiding it.
The Census data suggests companies are shifting from FOMO-driven AI adoption to more evidence-based decisions about what actually works.
Japan enacted its AI Promotion Act with no penalties and no strict compliance—just a request that companies "endeavor to cooperate" and the threat of public shaming. It's a deliberate bet on regulatory minimalism to boost lagging AI investment.
Companies are investing heavily in AI tools they don't fully understand, leaving product and legal teams to manage procurement, vendor relationships, and integration for technology whose behavior they can't fully vet.
Neon, a call-recording app that pays users to sell their audio to AI companies, went dark this week after TechCrunch discovered a security flaw that exposed every user's phone numbers, call recordings, and transcripts to anyone else using the app.
OpenAI cut off FoloToy's API access after researchers at the Public Interest Research Group found the company's AI teddy bear teaching children…