The feature you refuse to ship says more than the one you launched
Legal AI vendors should publish what they refuse to build, not just what they ship. Architectural constraints aren't limitations — they're competitive differentiators. The first privilege breach will prove who got this right.
Ryan McDonough published something that really got me thinking. His core argument: companies building legal AI should publish structured disclosures about what their products cannot do and which capabilities they refuse to ship because the risks can't be controlled.
Not buried in documentation. Not hidden in support tickets. Published transparently as part of the product positioning.
The idea traces back to something surprisingly practical. Thinkst Canary, a security company, publishes a /security page that explains what their product can safely do, what it cannot do, and the features they refuse to ship because they cannot make them safe. It's straightforward, oddly rare in legal tech, and probably one of the most practical ideas the industry could adopt.
We spend a lot of time talking about responsible AI, but very few legal tech products describe their design choices in a way that actually helps buyers understand risk. Instead we default to the usual assurances: encrypted, private, compliant. All of which matters, but none of which tells you whether an AI tool behaves safely inside a matter-driven environment where privilege and client separation are the foundations of trust.
The real risks sit underneath the features
When you look at the issues that genuinely worry lawyers, they rarely come from surface-level capabilities. They appear underneath. A retrieval step that drifts across matters. A connector that reaches further than intended. An agent granted write access that nobody fully controls. A model that pulls context from a folder it should not even be able to see.
These risks don't come from the presence of AI. They come from design decisions that were never made explicit. A guardrail or a policy document cannot compensate for an architecture that allows behavior the vendor never intended. This is why most procurement checklists, however well written, are mismatched to the actual failure modes of AI-driven products.
Legal AI is moving quickly, but the way we evaluate these products still belongs to an earlier era. The tools have changed. The questions have not.
McDonough's framework: six components of a duty of care
McDonough's proposed structure includes six components that, taken together, form something closer to an architectural disclosure than a compliance checklist:
- Scope of access — A clear account of what the system can read, what it can write, and which areas it never touches.
- Boundary enforcement — A description of how matter separation is maintained in practice, including the mechanisms that prevent cross-contamination rather than assuming the model will behave.
- Behavior under failure — An explanation of how the system responds when the model produces something unusual and how this is contained so it never reaches privileged content.
- Breach containment — A realistic assessment of the consequences of a vendor-side compromise, with attention to the parts of the design that limit the blast radius.
- Intentional omissions — A list of the capabilities the vendor has chosen not to build because the risks cannot yet be controlled.
- Evidence of validation — A description of how boundaries are tested, how releases are verified, and how clients can gain confidence that the assurances hold over time.
The intentional omissions section is what caught my attention. That's the one that separates vendors who have actually interrogated their own architecture from those who are shipping first and figuring out boundaries later.
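For readers who think in schemas, here is one way the six components could be laid out as a structured disclosure a vendor publishes alongside the prose version. This is a sketch under my own assumptions; the field names and example values are invented for illustration, not McDonough's format or any vendor's actual document.

```python
# Rough, illustrative schema for a published duty-of-care disclosure.
# Field names and example values are assumptions, not a standard format.
from dataclasses import dataclass, field

@dataclass
class DutyOfCareDisclosure:
    scope_of_access: dict           # what the system can read/write, and what it never touches
    boundary_enforcement: str       # how matter separation is enforced in practice
    behavior_under_failure: str     # how unusual model output is contained
    breach_containment: str         # blast radius of a vendor-side compromise
    intentional_omissions: list = field(default_factory=list)  # capabilities deliberately not built
    evidence_of_validation: str = ""  # how boundaries are tested and verified per release

example = DutyOfCareDisclosure(
    scope_of_access={"read": "documents within the requesting matter only",
                     "write": "none", "never": "other matters, billing, HR systems"},
    boundary_enforcement="matter ID checked at the storage layer on every retrieval",
    behavior_under_failure="anomalous output is quarantined before it reaches privileged content",
    breach_containment="per-matter encryption keys limit exposure to a single matter",
    intentional_omissions=["cross-matter embedding", "autonomous contract rewriting"],
    evidence_of_validation="boundary-leak tests gate every release",
)
```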
Why architectural constraints are competitive differentiators
When I led product counsel at Box, the teams that moved fastest weren't the ones with the longest feature lists. They made clear architectural decisions early about what they wouldn't build. Not "we'll get to that later" — actual design constraints they committed to maintaining.
That same principle applies to legal AI. The hard part isn't deciding to build retrieval across matter boundaries or autonomous contract rewriting. The hard part is deciding not to build those capabilities even when you technically could, because you can't guarantee the boundaries that prevent privilege contamination.
Most product roadmaps treat constraints as temporary limitations to overcome. McDonough's framework treats them as competitive differentiators: the architecture itself makes the claim "we designed this system so certain failures are impossible," rather than "we'll handle those failures when they happen."
I've seen this play out in privacy infrastructure. At Atlassian, we scaled to 5,000+ marketplace partners not by building more features but by solving their compliance problems through architecture. Privacy-by-design wasn't a constraint on what we could build. It was the reason partners chose our platform.
What this looks like in practice
Here's a concrete example. Your product team proposes building cross-matter semantic search to improve relevance. The question isn't "can we build this safely?" It's "can we prove to customers that matter boundaries are enforced at the storage layer, not just at the query layer?"
If the answer is no, you document that in your public disclosure: "We do not support cross-matter retrieval because we cannot guarantee privilege separation when embeddings are created from mixed corpuses."
That statement does two things. It tells customers exactly what your architecture prevents. And it tells your engineering team that relaxing that constraint later requires solving a hard architectural problem, not just shipping a feature flag.
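To make the distinction concrete, here is a minimal Python sketch of the two patterns. It is illustrative only; the class and function names are my own and don't describe any particular vendor's implementation. In the first pattern each matter gets its own index, so there is no code path that reaches another matter's embeddings; in the second, a shared index is filtered after the fact, and a missing or buggy filter silently leaks content.

```python
# Illustrative sketch: storage-layer matter isolation vs. query-layer filtering.
# All names are hypothetical; this is not any vendor's actual design.

class MatterScopedStore:
    """One index per matter. Cross-matter retrieval has no code path."""

    def __init__(self):
        self._indexes = {}  # matter_id -> list of (embedding, document)

    def add(self, matter_id: str, embedding, document: str):
        self._indexes.setdefault(matter_id, []).append((embedding, document))

    def search(self, matter_id: str, query_embedding, top_k: int = 5):
        # The search is scoped before it starts: only this matter's index is visible here.
        index = self._indexes.get(matter_id, [])
        scored = sorted(index, key=lambda item: _distance(item[0], query_embedding))
        return [doc for _, doc in scored[:top_k]]


def query_layer_filter(all_results, matter_id):
    # The weaker pattern: search a shared index, then filter afterwards.
    # If this filter is skipped or wrong, other matters' content comes through.
    return [r for r in all_results if r["matter_id"] == matter_id]


def _distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

The practical difference is the one the disclosure captures: relaxing the first design means changing how data is stored, not just toggling a feature flag.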
McDonough illustrates this with a fictional vendor example — "LexLexLex AI" — where the duty of care disclosure reads like an engineering spec translated for legal buyers. Every request generates a short-lived workspace limited to a specific matter. All retrieval paths are checked at the storage layer. If a document doesn't belong to that matter, the system cannot see it. Autonomous contract rewriting and cross-matter embedding are listed as intentional omissions because their boundaries cannot be guaranteed.
That level of clarity helps firms understand whether a product aligns with their risk appetite. It also creates a culture where vendors show their reasoning rather than hide behind broad assurances.
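As a rough illustration of the "short-lived workspace" idea, here is a hedged Python sketch of request-scoped matter isolation. It reconstructs the pattern described above with invented names (matter_workspace, storage.load_documents, run_model); it is not an actual implementation, and the vendor in the example is fictional anyway.

```python
# Illustrative only: a request-scoped workspace that can see exactly one matter.
# All names and structure are invented for this sketch.
from contextlib import contextmanager

@contextmanager
def matter_workspace(matter_id: str, storage):
    """Provision an ephemeral workspace bound to a single matter, then tear it down."""
    workspace = {
        "matter_id": matter_id,
        # Only this matter's documents are ever loaded into the workspace.
        "documents": storage.load_documents(matter_id),
        "writes_allowed": False,  # the model can propose edits, never apply them
    }
    try:
        yield workspace
    finally:
        workspace.clear()  # nothing survives the request

# Usage sketch: every model call runs inside a workspace scoped to one matter.
# with matter_workspace("matter-123", storage) as ws:
#     answer = run_model(question, context=ws["documents"])
```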
Buyers need better questions, not longer checklists
One of the reasons legal AI procurement conversations stall is that people rely on familiar compliance questions. They're tidy, predictable, and easy to check off, but they don't reveal how an AI system actually behaves. The next generation of procurement needs to probe the underlying design rather than the surface guarantees.
The more useful questions sound like this:
- Which risks remain inherent in your current design, and how are they contained?
- How do you enforce matter boundaries at the storage layer?
- Which capabilities have you intentionally avoided because they cannot yet be delivered safely?
- How do you test for context leakage or boundary drift, and what blocks a release?
- If your environment were compromised, which client assets would be accessible and why?
- What prevents your system from modifying or overwriting live documents?
- How do you ensure new features or updated prompts don't widen the blast radius?
- What evidence do you provide to show these assurances hold over time?
These questions shift the evaluation from sentiment to structure. They push vendors to explain their reasoning instead of offering recycled phrases.
The fork in the road
The companies building legal AI right now face a choice. You can compete on feature velocity — ship everything customers ask for and figure out the boundaries later. Or you can compete on architectural guarantees — ship a smaller surface area but make explicit promises about what your system will never do.
Legal AI doesn't always need deeper autonomy. Retrieval-only tools, structured extraction, local inference for sensitive content, and clear matter segmentation often provide more predictable value than an agent trying to interpret context across entire repositories. There is always a temptation to build the next clever layer. The responsible choice is to expand capability only when you can guarantee the boundary that protects it.
Product teams that publish their design constraints early will capture the customers who care about operational safety. Teams that avoid transparency will capture customers who prioritize feature breadth. Both markets exist. But only one of them survives the first major privilege breach.
McDonough calls this a duty of care. I'd call it product discipline. The best legal AI companies will be the ones willing to say "we won't ship this" in public.
This is issue-spotting and strategic analysis, not legal advice for your specific situation.
Source: Ryan McDonough

