Ken Priore

Agents

96 posts

Observational memory changes the AI governance equation

AI agents are moving from retrieving data to building memories about users. Most privacy frameworks weren't designed for that shift — and the gap is widening fast.

Foundations

Get ahead of the skill library problem now

What compliance teams haven't figured out yet is that they own this problem.

Signals

When agents forget, who's accountable?

Engineers call this context management. Lawyers should call it something else: selective deletion with no retention policy.

Foundations

Memory-driven AI agents create governance problems, not just engineering ones

AI agents with memory aren't just smarter — they're harder to govern. Each memory layer creates distinct privacy and retention obligations product counsel needs to address at the architecture stage.

Foundations

MCP servers are creating a security problem most teams haven't noticed yet

MCP servers let AI agents access your APIs without custom code. Most weren't built for production security. That gap between "works in demo" and "safe at scale" is where the liability lives.

Foundations

When agents forget purpose, governance has a context problem

When long-running AI agents summarize their own context to stay within token limits, they're deciding what to forget. That's not an engineering problem — it's a governance one.

Foundations

LLMs create a new blind spot in observability

LLMs break traditional observability — and that creates a compliance gap most governance teams haven't addressed yet. If you can't trace the full AI pipeline, you can't audit it.

AI

Without observability, AI fails in silence

SaiKrishna Koorapati's piece in VentureBeat makes the case that observable AI isn't about adding monitoring dashboards. It's about audit trails that connect every AI decision back to its prompt, policy, and outcome.

Foundations

© 2025 Ken Priore
