AI and platform engineering are merging — and that changes the developer productivity equation

AI and platform engineering are converging. For governance teams, that means the platform — not the policy doc — is where your AI guardrails actually live. The architecture matters.


Jennifer Riggins published a thorough piece in The New Stack that captures something I've been tracking from the legal and governance side: AI isn't replacing developer teams — it's amplifying them. And the organizations that get this right are the ones building AI adoption on top of platform engineering, not alongside it.

The convergence matters for anyone working in product counsel, AI governance, or legal operations. Here's why.

Platform engineering as the governance backbone

Platform engineering — the practice of building internal platforms that create standardized "golden paths" for developers — has quietly become the infrastructure layer that makes safe, compliant AI deployment possible. Most of the engineering leaders Riggins interviewed placed their organizations at the "operational" tier of the Cloud Native Computing Foundation's Platform Engineering Maturity Model: standardized tooling, centrally tracked by a dedicated team.

That's significant. An operational platform means there's already a spine for governance. When you layer AI coding assistants and autonomous agents on top of that, you get guardrails baked into the workflow rather than bolted on after the fact. Dave Bresci at PagerDuty described Backstage and Spotify Portal as the "lynchpin" of their platform strategy, enabling golden paths and centralizing documentation — exactly the kind of infrastructure that makes AI adoption auditable and controllable.

For product counsel, this is the frame that matters: platform engineering is where your AI governance policies actually get enforced. Not in a policy document. In the tooling.

The productivity gains are real — and they raise the compliance stakes

The numbers Riggins gathered are hard to ignore:

  • Spotify reports that AI agents have generated more than 1,500 merged pull requests, delivering 60% to 90% time savings on larger migrations. Their internal AI knowledge assistant, AiKA, cut internal support request resolution time by 47%.
  • SaaScada's CTO Paul Payne says his team has moved from writing code by hand to building specialized agents with Claude Code, compressing months of work into days and weeks.
  • Appknox allocates a separate budget for AI tooling because "the return is clear" — and as a security company, every tool gets vetted for privacy and compliance.

That last point is the one that should catch the attention of legal leaders. When AI becomes this embedded in development workflows, when 90% of developers are using AI coding tools daily, the compliance surface area expands dramatically. Code generated by AI still needs to meet your regulatory requirements. Pull requests merged by agents still carry legal risk. The speed is a gift, but only if governance keeps pace.

Agentic AI is next — and it demands a governance rethink

The article's most forward-looking section focuses on agentic AI: autonomous agents that don't just assist developers but take independent action toward goals, building on their own past experiences to influence decision-making.

Max Marcon at MongoDB compared the current state to early Open Banking — companies are focused on making sure agents behave safely, access only the right data, and operate under clear governance. João Freitas at PagerDuty pointed to the Model Context Protocol (MCP) and Agent2Agent (A2A) standards as emerging frameworks for agent deployment and monitoring.

For anyone in AI governance, this is the signal to start building your frameworks now. When agents can autonomously generate code, open pull requests, and make decisions across thousands of repositories, your governance model needs to account for the following (a rough sketch of an audit record follows the list):

  • Provenance tracking: Which agent made which change, using which model, under which instructions?
  • Decision auditability: Can you reconstruct why an agent took a specific action?
  • Cross-organization risk: As agents begin limited collaboration across companies, who owns liability?
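
To make the first two items concrete, here is a minimal sketch of what an agent action audit record could look like. It's an illustration under my own assumptions, not a schema drawn from MCP, A2A, or any vendor; every field name here (agent_id, instructions_hash, and so on) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class AgentActionRecord:
    """One auditable action taken by an autonomous agent.

    Illustrative only: these field names are hypothetical, not drawn
    from MCP, A2A, or any vendor schema.
    """
    agent_id: str           # which agent acted (provenance)
    model: str              # which model version it was running on
    instructions_hash: str  # fingerprint of the instructions in force
    action: str             # e.g. "open_pull_request"
    target: str             # e.g. repository and PR identifier
    rationale: str          # agent-reported reasoning, for auditability
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def instructions_fingerprint(instructions: str) -> str:
    """Stable hash so an auditor can prove which instructions applied."""
    return hashlib.sha256(instructions.encode("utf-8")).hexdigest()

# Example: log an agent-opened pull request as an append-only JSON line.
record = AgentActionRecord(
    agent_id="migration-agent-07",        # hypothetical agent name
    model="example-model-v2",             # hypothetical model label
    instructions_hash=instructions_fingerprint(
        "Migrate all services to the new logging library."
    ),
    action="open_pull_request",
    target="org/payments-service#1234",   # hypothetical repo and PR
    rationale="Old library is EOL; change verified by existing tests.",
)
print(json.dumps(record.__dict__, indent=2))
```

The point of hashing the instructions rather than the code is reconstruction: an auditor can prove exactly which instructions were in force when the agent acted, which is what decision auditability requires.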

Payne at SaaScada captured both the opportunity and the risk: "The best engineering teams will take calculated risks on where and how to constrain AI behavior so their software performs better with each subsequent model release." That word — constrain — is where legal and product teams earn their keep.

Developer experience is the adoption lever — not mandates

The piece closes with what I think is its most important insight for leadership: buying a tool does not equal adoption.

A Multitudes white paper found that leadership's actions are among the most important factors in AI adoption success. Only when leaders made adoption a vocalized priority — a "clear expectation," not a mandate — did they see accelerated tooling adoption. Helen Greul at Multiverse.io put it plainly: "Technology is accelerating — people need steadiness."

Molly Clarke at easyJet emphasized something that resonates with how I think about product counsel's role: "Ask the users what they want." Gates for security and compliance should be mandatory. Everything else serving internal developers — including the platform strategy itself — should be optional. The goal is to make the right thing the easiest thing, so developers don't want to leave the golden path.
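
To illustrate that principle, here is a hedged sketch of a golden-path pipeline in which security and compliance gates are non-negotiable while everything else is opt-out. The step names and the required flag are my own invention, not any real platform's configuration format.

```python
# Hypothetical golden-path pipeline: security and compliance gates are
# non-negotiable; developer-experience steps can be opted out of.
GOLDEN_PATH = [
    {"step": "sast_scan",      "required": True},   # security gate
    {"step": "license_check",  "required": True},   # compliance gate
    {"step": "ai_code_review", "required": False},  # optional assist
    {"step": "preview_deploy", "required": False},  # optional convenience
]

def effective_pipeline(opt_outs: set[str]) -> list[str]:
    """Drop opted-out steps, but never a required gate."""
    return [
        s["step"]
        for s in GOLDEN_PATH
        if s["required"] or s["step"] not in opt_outs
    ]

# A team can skip the optional steps; the gates always run regardless.
print(effective_pipeline({"ai_code_review", "preview_deploy", "sast_scan"}))
# -> ['sast_scan', 'license_check']
```

Note that opting out of sast_scan has no effect: the design choice is that the platform, not the team, decides which steps are gates.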

The convergence of AI and platform engineering creates a specific opportunity for product counsel and AI governance professionals:

  1. Get embedded in platform decisions. The platform is where governance lives in practice. If legal isn't at the table when golden paths are designed, you're writing policies that won't be followed.
  2. Treat AI tooling budgets as compliance investments. Appknox's approach — separate AI budget, every tool vetted for privacy and compliance — should be the baseline, not the exception.
  3. Build agentic governance frameworks before you need them. MCP and A2A are emerging standards. Legal teams that understand these protocols now will be positioned to shape policy rather than react to incidents.
  4. Align on measurement. If engineering is tracking DORA metrics and SPACE frameworks, legal should understand what those metrics capture and where compliance signals can be integrated (see the sketch after this list).
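
On the measurement point, here is a quick sketch of what integrating a compliance signal alongside DORA metrics might look like. Change failure rate is a standard DORA metric; the gates_passed/gates_total fields and the companion "gate pass rate" are my own hypothetical illustration, not part of DORA or SPACE.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """A production deployment annotated with compliance gate results.

    'failed' feeds the standard DORA change failure rate; the gate
    counts are a hypothetical compliance signal riding alongside it.
    """
    service: str
    failed: bool        # did this deployment cause a production failure?
    gates_passed: int   # e.g. license scan, privacy review, SBOM check
    gates_total: int

def change_failure_rate(deploys: list[Deployment]) -> float:
    """DORA metric: share of deployments that caused a failure."""
    return sum(d.failed for d in deploys) / len(deploys)

def gate_pass_rate(deploys: list[Deployment]) -> float:
    """Hypothetical companion metric: share of required gates passed."""
    passed = sum(d.gates_passed for d in deploys)
    total = sum(d.gates_total for d in deploys)
    return passed / total

deploys = [
    Deployment("payments", failed=False, gates_passed=3, gates_total=3),
    Deployment("search", failed=True, gates_passed=2, gates_total=3),
]
print(f"change failure rate: {change_failure_rate(deploys):.0%}")
print(f"compliance gate pass rate: {gate_pass_rate(deploys):.0%}")
```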

The playing field is flattening, as Greul said. Small teams can do previously unthinkable things. That's exhilarating — and it means the governance architecture has to be just as scalable as the technology it governs.

Source: Jennifer Riggins, "In 2026, AI Is Merging With Platform Engineering. Are You Ready?", The New Stack.