Ken Priore

AI

184 posts

How 2023 research predicted AI audit washing would enable discrimination

This 2023 analysis correctly predicted that AI audit requirements would create compliance theater without meaningful bias prevention, warnings that have proven increasingly relevant as agent technologies emerge.

Article

Discovery, Not Design: How AI Systems Actually Develop

The Machine Intelligence Research Institute's latest piece makes a blunt point: nobody controls what these systems become. Engineers discover behaviors after the fact rather than designing them upfront.

Signals

Questions to Ask When Everyone Else Is Guessing

Every six months, the rules change and old playbooks stop working. The only way through is to ask better questions. Here are nine questions that matter for anyone building with AI.

Reflections

When founders prototype with AI, product counsel must prototype with law

OpenAI's GPT-5 made "vibe coding" the new normal. When business leaders go from concept to working prototype in minutes, legal guidance needs to happen at prototype speed, not policy-document speed.

Signals

The Perplexity problem: when AI assistants challenge web infrastructure assumptions

AI systems won't fit our old categories, and our legal frameworks haven't caught up yet.

Agents

How systematic privacy governance becomes competitive advantage for AI deployment

EDPB guidance demonstrates how structured privacy governance approaches for LLM systems create competitive advantages while ensuring regulatory compliance.

Article

Memory architectures that let AI agents learn from experience

The architectural shift to persistent, structured memory is happening now. Teams building these systems need to classify the memory types their agents require and define the associated governance policies upfront.

Foundations

Documenting AI risk with NIST's four functions

The NIST AI RMF shifts AI risk management from abstract principles to a structured, operational process. ... When AI is ubiquitous, trust matters. The RMF provides a tool for building that trust through continuous improvement.

Foundations

© 2025 Ken Priore
