Ken Priore
  • Home
  • About
  • Signals
  • Reflections
  • Foundations

Reflections
When engineers define "AI-ready," legal inherits the error
Encoding contracts is a legal choice

Signals
Agent Governance Toolkit: what it is and why runtime enforcement is the missing layer
The design argument is straightforward: pre-deployment testing evaluates agent behavior against test cases

Reflections
The Governance Gap: What Harvey Misses
Autonomous agents are changing legal

AI
Death by Dialogue: The Case for Killing the Legal Chatbot
Legal work needs density, not dialogue.

AI
When AI Acts Before You Decide
The Fiduciary Illusion

Agents
Compliance used to mean documentation. For AI agents, it means something else entirely
Governance needs to be in the architecture conversation, not the incident response.

Foundations
Microsoft's Argos and the verification layer AI agents actually need
The framework trains AI agents to be right for the right reasons — not just right by coincidence. For AI governance, that distinction is everything.

AI
AI agents fail because they don't know what "customer" means at your company
AI agents fail because nobody defined what "customer" means in your business. Ontology infrastructure provides semantic guardrails that technical controls alone can't deliver.


© 2025 Ken Priore
