AI agents could dissolve the friction that keeps justice expensive

AI agents could structurally reconfigure law


Canceling a premium subscription that requires a certified letter. Navigating a multi-page insurance claim through shifting web portals. Fighting a credit card dispute across three different customer service channels. Modern administrative life is defined by intentional friction — what scholars call "sludge" — designed to make you surrender your rights because the cognitive and temporal cost of pursuing them is simply too high.

For decades, the division of labor was clean: humans provided the intent, machines followed commands. AI legal agents dissolve that boundary. These aren't chatbots that retrieve case law when you ask the right question. They're autonomous systems that perceive their environment, formulate strategic goals, and execute actions with minimal oversight. As OpenAI, Google, and Anthropic race to deploy agents (not to mention the OpenClaw craziness), the legal system's administrative sludge becomes a solvable engineering problem rather than an immovable structural feature.

Becher and Alarie argue this shift operates along dimensions that go well beyond efficiency. It restructures who holds legal power, where authority comes from, and when law intervenes.

Dismantling the gatekeepers

Legal expertise is a textbook case of Baumol's cost disease. Because legal work is labor-intensive and performed by highly paid professionals, its cost rises even as other sectors automate. Quality justice has become a luxury good, gatekept by billable hours.

AI legal agents provide a structural solution by decoupling legal expertise from human time. When systems like Luminance or Pactum automate complex contract analysis and negotiation, they transform law from a scarce professional service into a distributed resource. A tenant facing eviction doesn't need to know which statute applies or how to draft a legal filing. The agent reviews the lease, cross-references local protections, identifies procedural defects in the eviction notice, and escalates the matter — all without the tenant needing to understand the mechanics.

The access numbers are stark. Low-income Americans receive no or inadequate legal help for roughly 92% of their substantial civil legal problems. The counterfactual matters: the comparison for AI legal agents isn't a flawless human attorney. For most people facing legal problems, the realistic alternative is no help at all.

Killing the repeat-player advantage

Marc Galanter identified the structural asymmetry fifty years ago: institutional "repeat players" — corporations, landlords, insurers — systematically outperform individual "one-shotters." Repeat players spread costs across cases, cultivate procedural expertise, and strategically shape legal rules to serve their long-term interests. An individual tenant has one eviction. The landlord's legal team has handled hundreds.

AI agents could compress that gap by giving individuals institutional memory. A tenant's agent synthesizes insights from thousands of similar disputes, identifying patterns the individual would never see. An employee's agent detects irregular payment timing or benefit enrollment discrepancies that foreshadow employment disputes. The agent provides the strategic capacity and institutional knowledge that only well-resourced entities currently possess.

This is where the naming, blaming, and claiming framework, established by Felstiner, Abel, and Sarat, becomes concrete. Most disputes die before they reach a courtroom because the three-stage lifecycle breaks down at every step. People don't recognize they've been harmed (naming fails). They can't determine who's responsible (blaming requires expertise they lack). They abandon pursuit because the friction is too high (claiming requires resources they don't have). AI agents automate the entire chain: continuous monitoring replaces subjective realization, probabilistic legal models replace gut-feeling fault attribution, and automated filing replaces the stamina test of formal institutional engagement.
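The three-stage lifecycle the framework describes can be pictured as a pipeline, with one agent function standing in for each stage where individuals typically give up. This is a minimal sketch; the record format, function names, and sample data are hypothetical illustrations, not anything specified in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dispute:
    named: bool = False            # harm recognized
    blamed: Optional[str] = None   # responsible party identified
    claimed: bool = False          # formal claim filed

def monitor(records):
    # Naming: continuous monitoring replaces subjective realization.
    return [r for r in records if r["overcharge"] > 0]

def attribute(harm):
    # Blaming: map the detected harm to a responsible party.
    return harm["counterparty"]

def file_claim(harm, party):
    # Claiming: automated filing replaces the stamina test.
    return Dispute(named=True, blamed=party, claimed=True)

# Hypothetical billing records the agent watches on the user's behalf.
records = [
    {"counterparty": "Acme Insurance", "overcharge": 120.0},
    {"counterparty": "Utility Co", "overcharge": 0.0},
]
disputes = [file_claim(h, attribute(h)) for h in monitor(records)]
```

The point of the sketch is structural: no stage depends on the user noticing anything, so a dispute that would normally die at "naming" flows through to a filed claim.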

The HOA test case

Becher and Alarie ground their argument in a neighborhood dispute from Dripping Springs, Texas. A resident started giving backyard swimming lessons. The HOA objected. Lawyers got hired. The fight escalated to county officials, then state legislators. Months of friction, thousands in costs, a community torn apart.

In an agent-driven world, the parties' AI agents analyze the bylaws, homeowner rights, and local regulations — then negotiate directly. The agents find solutions that adversarial lawyers wouldn't propose because lawyers are trained for zero-sum outcomes. Language models presented with this scenario have generated integrative resolutions: reciprocal service exchanges where residents trade skills under cooperative guidelines, micro-zoning variances with sunset provisions for periodic review, community benefit partnerships that formalize the instructor's role in exchange for subsidized neighborhood access.

This connects to what Becher and Alarie call the "AI Coase Equilibrium." Ronald Coase's theorem holds that without transaction costs, parties negotiate toward efficient outcomes regardless of initial legal entitlements. Real-world transaction costs — legal fees, information asymmetries, emotional barriers, cognitive biases — prevent that from happening. A homeowner and contractor locked in a $5,000 dispute may spend $15,000 in combined legal expenses. AI agents could collapse those costs toward zero, making preventive resolution the default rather than reactive litigation.
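The transaction-cost arithmetic can be made explicit with a toy expected-value check. Only the $5,000 stake and the $15,000 combined legal spend come from the article; the 60% win probability and the $50 agent cost are assumed figures for illustration.

```python
def pursuing_is_rational(stake, win_prob, transaction_cost):
    """True if the expected recovery exceeds the cost of pursuing it."""
    return stake * win_prob - transaction_cost > 0

# Article's dispute: $5,000 at stake, ~$7,500 per side in legal fees
# ($15,000 combined). Agent cost is an assumed near-zero figure.
human_cost, agent_cost = 7_500, 50

print(pursuing_is_rational(5_000, 0.6, human_cost))  # False: claim abandoned
print(pursuing_is_rational(5_000, 0.6, agent_cost))  # True: resolution viable
```

Under human-lawyer costs the expected value is negative, so the rational move is surrender; collapse the transaction cost and the same claim becomes worth pursuing, which is the Coasean point.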

Algorithmic constitutionalism

The most conceptually provocative argument in the paper concerns authority itself. As AI legal agents become embedded in legal processes, their programming constraints start functioning as de facto constitutional principles. The rules baked into an agent's architecture — what it can negotiate, what boundaries it respects, how it weighs competing interests — become the operative governance framework for millions of daily interactions.

The AAA-ICDR's launch of an AI-native arbitrator in September 2025 makes this concrete. An autonomous system analyzing submissions and generating awards is exercising legal authority. The values embedded in its design determine how that authority operates. Unlike external ethical guidelines, these constraints shape behavior from within.

To manage the tension between standardization and personalization, Becher and Alarie propose a three-tier spectrum. Tier 1 covers mandatory rules — tax reporting, anti-money laundering, safety standards — where agents must comply regardless of user preferences. Tier 2 handles commercial contracts, IP licensing, and financial agreements where standardization and flexibility need to coexist. Tier 3 allows extensive personalization for wills, community bylaws, and private dispute resolution where external effects are minimal.
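One way to picture the three-tier spectrum is as a policy table an agent consults before personalizing a term. The domain names and the helper below are a hypothetical sketch of that idea, not an implementation from the paper.

```python
from enum import Enum

class Tier(Enum):
    MANDATORY = 1     # Tier 1: comply regardless of user preferences
    HYBRID = 2        # Tier 2: standardization and flexibility coexist
    PERSONALIZED = 3  # Tier 3: extensive personalization permitted

# Hypothetical mapping of legal domains to the paper's three tiers.
TIER_BY_DOMAIN = {
    "tax_reporting": Tier.MANDATORY,
    "anti_money_laundering": Tier.MANDATORY,
    "commercial_contract": Tier.HYBRID,
    "ip_licensing": Tier.HYBRID,
    "will": Tier.PERSONALIZED,
    "community_bylaws": Tier.PERSONALIZED,
}

def may_personalize(domain: str) -> bool:
    """An agent checks the tier before deviating from a standard term."""
    return TIER_BY_DOMAIN[domain] is not Tier.MANDATORY
```

Even in this toy form, the contested design decision is visible: everything interesting hinges on how much latitude `Tier.HYBRID` actually grants.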

The contested territory sits in Tier 2, and that's where product and legal teams are making governance decisions right now through the systems they build. How much negotiating latitude does a contract agent get? When does personalization create coordination failures? These are constitutional questions dressed in product requirements.

Elena's jurisdictional mesh

Consider Elena, a freelance developer in Silicon Valley caught in a cross-border dispute with a European client. Her AI agent simultaneously navigates California contract law, federal trade restrictions, EU digital market rules, and the gaming platform's terms of service. The agent synthesizes a resolution drawing from multiple legal frameworks, enforced through smart contracts and platform reputation mechanisms rather than any single court system.

This is jurisdictional synthesis — coordinating across state law, private legal systems, and platform governance to produce outcomes no single traditional system would recognize as entirely its own. Authority shifts from sovereign power (what a state says) toward functional performance (what the agent can actually resolve). Legal space stops being a map of countries and becomes a network of overlapping, AI-mediated standards that follow the user regardless of physical location.

Legal pluralism has always existed — arbitration, religious courts, industry regimes. AI agents accelerate it by operating at a scale and sophistication that traditional pluralistic systems can't match.

Is the design window closing?

The trajectory toward what scholars call the "legal singularity" — law as a self-organizing, adaptive, ambient resource — is already underway. The question is whether that trajectory leads to law that's finally intelligible to the people it serves or to a two-tiered system where premium AI provides superior justice for the wealthy while basic AI leaves everyone else behind.

The digital divide frames the risk: 32.1% of the global population remains offline, and 43% of low-income US households face internet affordability challenges. Platform capitalism's track record — early empowerment yielding to consolidation, extraction, and degradation of platform quality — is the default trajectory absent deliberate intervention.

The programming constraints you embed in these agents are governance choices. The transparency requirements you adopt or skip are constitutional choices. Whether you fund open-source alternatives or build walled gardens determines whether these tools expand access or entrench new forms of inequality. None of those outcomes happen automatically.

It is design, not destiny. The window for making those design choices well is open now.

References

Based on "Legal Order in the Age of AI Agents" by Samuel I. Becher (Victoria University of Wellington) and Benjamin Alarie (University of Toronto), forthcoming in University of Toronto Law Journal (2026).

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6001277