Ken Priore
  • Home
  • About
  • Signals
  • Reflections
  • Foundations

AI

236 posts

LLMs create a new blind spot in observability

LLMs break traditional observability — and that creates a compliance gap most governance teams haven't addressed yet. If you can't trace the full AI pipeline, you can't audit it.

AI

Treat Your AI Agent Like a Junior Associate—Not Like Magic

You wouldn't tell a first-year associate "do law" and expect good results. So why are attorneys doing exactly that with AI agents? Dan…

AI

AI reasoning explanations fail four times in five: what to verify before shipping

The trajectory is encouraging — the most capable models performed best. But 20 percent is not a foundation for compliance frameworks.

AI

Without observability, AI fails in silence

SaiKrishna Koorapati's piece in VentureBeat makes the case that observable AI isn't about adding monitoring dashboards. It's about audit trails that connect every AI decision back to its prompt, policy, and outcome.

Foundations
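The excerpt above describes audit trails that connect each AI decision back to its prompt, policy, and outcome. As a rough illustration of what one such record could look like (the class, field names, and hashing choice are illustrative assumptions, not taken from the article):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class AuditRecord:
    """One traceable AI decision: what was asked, under which policy, with what result."""
    prompt: str
    policy_id: str
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Deterministic hash so the record can be verified later in an audit."""
        payload = json.dumps(
            {"prompt": self.prompt, "policy": self.policy_id, "outcome": self.outcome},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


# Hypothetical usage: log the decision alongside its fingerprint.
record = AuditRecord(
    prompt="Summarize the contract's termination clause",
    policy_id="legal-review-v2",
    outcome="summary_returned",
)
print(record.fingerprint()[:12])
```

The point of the fingerprint is that the same prompt, policy, and outcome always hash to the same value, so a stored record can be checked for tampering later.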

The accountability gap just became a security gap

The accountability gap doesn't just create compliance risk. It creates operational security risk. When model developers point to deployers and deployers point to model developers, the space between them becomes the attack surface.

Foundations

Linux Foundation governance gives AI agents the trust infrastructure they need

Before MCP, every AI application needed custom connectors for each data source. Without foundation governance, that success creates three risks: proprietary lock-in, protocol fragmentation, or de facto control by a single company. AAIF prevents all three.

Signals

When Robots Negotiate: How Human Tactics Shape AI Deals

Advancing AI Negotiations: New Theory and Evidence from a Large-Scale Autonomous Negotiations Competition. Authors: Michelle Vaccaro, Michael Caoson, Harang Ju, Sinan Aral, and Jared R. Curhan.

Foundations

AI agents fail in production because teams skip the boring parts

You can't eliminate non-determinism in LLMs, and you shouldn't try. The goal is management, not elimination.

Foundations


© 2025 Ken Priore
