When AI moved from generating outputs to taking actions, it crossed a legal threshold.
Europe has the infrastructure. What it lacks is the legal plumbing that makes agent delegation enforceable when something goes wrong.
The wiki-as-memory pattern is one answer. Not the only one, but a structurally honest one.
Read with a highlighter. There are a lot of pages. Most of them earn their place.
Based on analysis of the EU digital identity framework and agentic AI deployment patterns. Original source: Digital Identity in the Age of AI Agents
Governance needs to be part of the architecture conversation, not the incident response.
The framework trains AI agents to be right for the right reasons — not just right by coincidence. For AI governance, that distinction is everything.
The authors suggest treating AI agents as "legal actors" — entities that bear duties and can be held to them — without granting them legal personhood.
The accountability gap doesn't just create compliance risk. It creates operational security risk. When model developers point to deployers and deployers point to model developers, the space between them becomes the attack surface.