Onyx Security raises $40M to build an AI control plane for enterprise agents

Your existing security architecture assumes humans review and approve decisions. Agentic systems break this pattern.


The moment autonomous AI agents touch real decisions, security stops being about filtering bad outputs. It becomes about controlling what actions an agent can take in the first place.

Onyx Security just raised $40 million on exactly this premise. Their platform sits between enterprise agents and the systems they interact with—discovering which agents are running, observing their reasoning in real time, and intervening when something looks wrong. You can approve actions before they execute. You can block them. You can correct them mid-stream. This is a fundamentally different security problem than the one we've been solving for language models.

For companies deploying agentic systems in production, that distinction matters enormously.

Why agents create a new class of risk

When an AI system only generates text, your risk surface is bounded. A model hallucinates, a user reads the output, and you catch the error before acting on it. The human is still in the loop making the decision. The model can generate bad information, but it can't do anything without permission.

Agents change this equation. An agent can write code, call APIs, move money, delete files, send communications, trigger workflows. The same hallucinations that produce grammatically coherent nonsense in a chatbot now produce grammatically coherent instructions to systems that execute them blindly. A prompt injection attack doesn't just poison text—it poisons the agent's reasoning, which then drives real actions.

Your compliance, risk, and security teams are understandably freaked out.

What legacy controls can't do

Your existing security architecture assumes humans review and approve decisions. Agentic systems break this pattern. An agent might execute hundreds of decisions per minute across dozens of systems. You cannot review each one. So you need to shift from 'humans approve decisions' to 'the system only makes decisions we would approve.'

This is a different security posture entirely. You're moving from detective controls—catching bad things after they happen—to preventive controls that stop bad things before execution. And you're doing it for decisions that move at machine speed.

Legacy security frameworks don't work here. Network firewalls can't inspect agent reasoning. Identity and access management assumes a human pressed a button. None of these tools watch an autonomous system making thousands of decisions and say 'wait, that one doesn't make sense—stop.'

What an AI control plane actually does

Onyx's platform has three core functions: discovery, observation, and intervention. Discovery means knowing which agents are running and what they're connected to. Observation means watching the agent's reasoning in real time. Intervention means acting on that intelligence before the agent crosses a line.
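Onyx hasn't published how its policy engine works, but the three functions above can be sketched as a single decision gate that sits in front of every agent action. This is a minimal illustration, not Onyx's implementation; the agent inventory, tool names, and risk threshold are all hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REQUIRE_APPROVAL = "require_approval"

@dataclass
class AgentAction:
    agent_id: str
    tool: str          # e.g. "payments.transfer", "fs.delete"
    risk_score: float  # 0.0 (benign) to 1.0 (critical)

# Discovery: an inventory of agents the platform knows about (hypothetical).
KNOWN_AGENTS = {"billing-agent", "support-agent"}

def evaluate(action: AgentAction) -> Verdict:
    """Toy control-plane gate: block undiscovered agents, escalate risky actions."""
    if action.agent_id not in KNOWN_AGENTS:
        return Verdict.BLOCK  # an agent discovery never saw gets stopped outright
    if action.risk_score >= 0.8 or action.tool.startswith("payments."):
        return Verdict.REQUIRE_APPROVAL  # intervention: pull a human into the loop
    return Verdict.ALLOW
```

The point of the sketch is the shape of the decision, not the rules themselves: every action is checked against the discovered inventory first, then against a policy, before anything executes.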

In practice, this looks like agent-native governance. The question shifts from 'what can this user do' to 'what can this agent decide to do, and under what conditions does it need human judgment.'

What teams need right now

If you're deploying agents today, start with minimum viable controls. Know which agents exist and what they touch. Add observation on agents touching your highest-risk systems. Implement one intervention policy: your most critical agents require human approval for sensitive actions.
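That single intervention policy can be as simple as a wrapper that refuses to run a sensitive tool without sign-off. A minimal sketch, with hypothetical names, where the approver callback stands in for whatever real channel (ticket, chat message) your team uses:

```python
from typing import Callable

class ApprovalRequired(Exception):
    """Raised when a sensitive action is attempted without human sign-off."""

def gated(action_name: str, approver: Callable[[str], bool]):
    """Decorator: run the wrapped tool only if the approver says yes."""
    def decorate(tool):
        def wrapper(*args, **kwargs):
            if not approver(action_name):
                raise ApprovalRequired(action_name)
            return tool(*args, **kwargs)
        return wrapper
    return decorate

# Deny by default: the agent can call this, but nothing moves until a
# human approves. The lambda is a placeholder for a real approval flow.
@gated("wire_transfer", approver=lambda name: False)
def wire_transfer(amount: float) -> str:
    return f"sent {amount}"
```

Starting with one gate on one critical tool is enough to build the muscle; broader policy engines can come later.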

The companies that build this muscle early will scale agents with confidence. The ones that don't are going to face painful incidents and emergency governance retrofits that slow everything down.

For more insights on where AI, regulation, and the practice of law are headed next, visit www.kenpriore.com.
