Ken Priore

I help organizations navigate the messy, fascinating intersection where law meets cutting-edge technology — making sure innovation doesn't just move fast, but moves thoughtfully.

I work with teams building AI systems and new platforms, helping them do it legally, ethically, and sustainably. Think translator, strategist, and early warning system rolled into one.

The work sits at the crossroads of law, technology, and business strategy. I'm not just interpreting today's rules—I'm helping teams anticipate what they'll need to consider tomorrow. It's about seeing around corners and turning complex legal frameworks into practical guidance that actually helps people build better products.

What drives me is this: you don't have to choose between innovative and responsible. The best solutions happen when you bring legal thinking in early—not as a roadblock, but as part of the design process. Whether that's embedding privacy into product architecture, structuring partnerships for responsible scaling, or developing AI governance frameworks that teams actually want to follow.

Beyond the Bio

I believe that excellence and humanity aren't competing values — they reinforce each other. Setting high standards matters, but so does creating the conditions where people can actually meet them. That means building trust, recognizing that good ideas can come from anywhere, and making sure teams feel supported when they're taking smart risks.

At its core, this work is about bringing different perspectives together and finding solutions that serve both the mission and the people behind it. It's collaborative problem-solving at its best — part strategic thinking, part relationship building, and part figuring out how to make complex things feel simple.

You can reach me at ken@kenpriore.com

In Conversation

Exploring AI, law, and innovation through podcasts, panels and interviews

The Convergence of AI and IP: Policy Frameworks That Actually Work

For the second year, I participated in PLI California's program on AI and intellectual property in San Francisco.

The conversation centered on the gap between having AI policies and actually implementing them. We walked through IP risks that often get missed—not just copyright infringement from training data, but ownership questions when outputs blend human and machine work, trade secret exposure when employees input sensitive information into public models, and patent implications when AI contributes to inventions without meeting the "significant human contribution" standard.

We spent time on agentic AI governance, which raises different questions than prompt-based systems. When AI acts autonomously rather than responding to inputs, the decision points shift. Where do humans intervene? How do you map data provenance when agents collect and use information dynamically? What standards apply when multiple AI systems hand tasks off to one another?

The session reinforced something I see across organizations: governance frameworks work when they're built into existing structures rather than layered on top. IP committees, invention disclosure processes, vendor approval workflows—these already exist. The question is how to extend them to handle AI-specific risks without creating parallel bureaucracy that teams ignore.

My takeaway: companies don't need more policies on paper. They need governance that maps to actual decisions teams make—which models to approve, which use cases to greenlight, which data sets carry acceptable risk. That's where IP protection and innovation can move forward together.

María José Cordero (Eleven Labs), Christopher A. Suarez (Steptoe), Ken Priore, and Scout Moran (Superhuman) at PLI San Francisco, November 2025

AI Governance as a Driver of Innovation

In October 2025, I presented "AI Governance as a Driver of Innovation Amidst Regulatory Flux" at IAPP Privacy. Security. Risk. in San Diego alongside Bret Cohen (Partner, Hogan Lovells) and Saima Fancy (Data Governance Product Manager, Adobe).

The session focused on a practical challenge: how to build AI accountability structures that match the speed at which teams are shipping. We presented a three-pillar framework treating trust as infrastructure rather than aspiration—establishing cross-functional governance with real authority, extending existing privacy and security controls instead of building parallel structures, and measuring trust impact alongside technical risk.

The conference reinforced a pattern I've been tracking: the EU AI Act, Colorado's AI law, and California's CCPA regulations are converging on the same operational demands. Companies must risk assess, test, and monitor high-risk systems. They must explain why systems reach certain results. They must demonstrate that someone's actually responsible for what their AI does.

The most valuable conversations happened in the hallways—practitioners admitting what's actually broken in their governance frameworks and sharing what's working. Those insights shaped a series of posts I published afterward on California's privacy infrastructure, the developer-deployer divide, making governance frameworks teams actually want to use, and how privacy principles struggle when applied to autonomous agents.

Saima Fancy, Bret Cohen, and Ken Priore at IAPP PSR, October 2025

We're not just regulating tools anymore—we're governing systems that make autonomous decisions.

In October 2025, I joined a panel at the American Bar Association's Artificial Intelligence and Robotics National Institute titled "In-House Insights: Managing AI Challenges and Change." Cynthia Cwik (JAMS) moderated a conversation with Matt Samuels (Anthropic), Roy Wang (Eightfold AI), Belinda Luu (Kaiser Permanente), and me that cut straight to what in-house teams are wrestling with right now.

Cynthia's opening question—what we really want from outside counsel—set the tone for a conversation that moved past platitudes. Matt and I explored how AI agents are creating an entirely new surface for governance. The shift is fundamental: we're moving from systems that assist to systems that decide. Roy brought us back to fundamentals: build what customers actually need, not just what the technology makes possible. And Belinda and I kept circling back to the same principle—trust isn't a buzzword, it's the foundation for any meaningful AI guidance.

The through-line across all of it? The best AI solutions emerge when legal thinking isn't bolted on at the end, but woven in from the start. That's where innovation and responsibility actually meet.

Cynthia Cwik (JAMS), Ken Priore, Belinda Luu (Kaiser Permanente), Matt Samuels (Anthropic), and Roy Wang (Eightfold AI) at the American Bar Association's Artificial Intelligence and Robotics National Institute, October 2025

Where AI Autonomy Meets Legal Reality

I recently spent a day at Stanford Law School's program on agentic AI, exploring one of the most pressing questions facing organizations today: how do you deploy AI systems that make independent decisions when your legal frameworks assume humans are always in control?

The room was filled with legal and product leaders from major corporations, all grappling with the same challenge. Their engineering teams are building agents that can interpret goals, plan multi-step actions, and execute transactions autonomously. But their organizational structures, contracts, and risk frameworks weren't designed for software that operates independently.

What struck me most was the gap between technical capability and deployment readiness. Companies have impressive demonstrations of autonomous AI, but they're stuck in pilot phases because they haven't solved the fundamental attribution problem: when an agent makes a decision you didn't directly program, who's accountable for the outcome?

This is exactly the kind of challenge I work on—helping teams build AI systems that are both innovative and responsibly deployed. The discussions at Stanford reinforced what I see in my practice: the organizations succeeding with agentic AI aren't just solving technical problems. They're building new frameworks for governance, accountability, and risk management that can adapt as AI capabilities evolve.

The technology is moving faster than the legal and organizational frameworks designed to govern it. My work sits right at that intersection—translating complex technical capabilities into practical legal and business strategies that teams can actually implement.


“AI is getting very good at acting like a lawyer. It isn’t one.”

Jessica Nguyen asked me on the In-House podcast the question many legal professionals are quietly wrestling with: is AI going to take our jobs?

My view is that AI is not replacing lawyers, yet it is reshaping nearly every role inside the legal function. I put it this way on the episode: “I’m not worried about it taking my job — I’m worried about how it will fundamentally change almost every job in the legal function.”

The conversation with Jessica and Jenny Hamilton (CLO, Exterro) circled around this idea of change. We drew comparisons to the early years of eDiscovery, when the shift from paper binders to searchable databases felt daunting. Those tools didn’t eliminate lawyers — they changed how lawyers worked, saved time, and created new ways to deliver value. The same pattern is now playing out in contracting, risk review, and other workflows with AI.

I emphasized that AI is getting good at acting like a lawyer, but it isn’t one, and that is precisely why the human role matters. As I said, “For folks who want to stay in the same exact spot where they are, that’s going to be really difficult.” Lawyers will need to stay in the loop to review, correct, and redirect outputs. That oversight is what prevents malpractice, protects clients, and ensures that efficiency doesn’t come at the cost of judgment.

The throughline in all of this is that adoption must be tied to real problems. Lawyers and teams have limited time, and new technology needs to show why it matters quickly. Curiosity is a good start, though durability comes from solving specific needs and making people’s work easier, not harder.

AI in legal is not a story of loss. It is a story of adjustment, new skills, and lawyers stepping into the role of translators between powerful tools and human judgment.


Harnessing AI to Elevate — Not Replace — Legal Talent

Thrilled to be back co-hosting the "In-House" podcast with Jessica Nguyen! 🎙️ We had a fantastic chat with DocuSign's legal leaders, Sandy MacDonnell and Krysta Johnson, about the future of the legal department.

We explored some fascinating themes:

The Strategic Evolution of Legal Ops 🚀: Legal ops has moved beyond cost-saving to become a strategic partner that drives revenue and shapes company goals. It’s about running the legal department like a business.

AI as an Efficiency Engine ✨: AI is already delivering value in everyday tasks, freeing up legal pros for higher-value, strategic work. The goal is to augment, not replace, human talent.

Human-Centric Tech Adoption 🤝: Successful tech implementation hinges on solving the right problems and keeping humans in the loop. Involving stakeholders early and managing change is critical for adoption.

My take? The lines between "legal ops" and "legal" are dissolving. As I noted in our chat, operational thinking is becoming integral for everyone, powered by technology that empowers us all. Listen to the full episode for more! 🎧


Who Owns the Call? Building Clarity, Speed, and Trust in Legal Decisions

Thrilled to join Jessica Nguyen on the In-House podcast for a deep dive into “The Art of Decision Ownership.” 🎙️ We explored how legal leaders can shift from the dreaded review-and-approve mode to becoming true trusted advisors by building shared ownership of risk, embedding legal early in strategic conversations, and creating a culture of psychological safety. I spoke about how, in organizations of scale, it’s critical to get early agreement on who’s the directly responsible decision maker — without it, you risk decision paralysis. I also emphasized that it’s not about legal saying no; it’s about sharing concerns, inviting dialogue, and getting teams to think differently about the problem.

My take throughout the discussion was that decision ownership is as much about process and relationships as it is about the decisions themselves. Clarity on ownership must come early, before hard calls arise. Legal should be embedded upstream in strategy, not just brought in for final approvals. Framing issues collaboratively — focusing on “how do we solve this together” rather than “legal says no” — is essential, as is documenting the thinking behind major decisions to protect the company in regulatory conversations. Finally, fostering psychological safety ensures teams feel empowered to take informed risks and view mistakes as opportunities to learn, not moments for blame.


Structuring AI Governance for Real-World Impact

In this Practising Law Institute program segment, I walked through the practical building blocks of an effective AI governance framework—translating abstract policy goals into actionable structures. I covered how to define an AI-specific mission, vision, and strategic plan; set measurable KPIs; and establish governing committees that blend C-suite leadership, legal, data science, IT, marketing, and other interdisciplinary voices. I emphasized the value of formal policies and guidance backed by clear internal controls, including approved-use lists for AI technologies, applications, data sets, and vendors. We explored policies governing public AI use, vendor oversight, and organizational messaging, as well as the role of checklists and approval workflows to keep governance embedded in daily operations. My approach underscored that strong governance isn’t just about risk management—it’s about enabling AI innovation within safe, transparent, and strategically aligned guardrails.


Developing an AI Use Strategy & Policy | Part 1

I kick off this series by digging into why AI policies matter now—and how to move from lofty principles to real guardrails teams can actually use. We explore the risks companies tend to overlook, the building blocks of an effective policy, and the cross-functional muscle it takes to make governance stick. My takeaway? Good AI governance isn’t about slowing things down—it’s about creating the conditions where innovation and responsibility can move forward together. 🚀


AI Use Policies: Involving the Whole Business | Part 2

For the second installment, I focus on one of the toughest challenges: making governance real across the entire organization. Policy can’t live in siloed documents or sit solely with legal—it has to be a shared framework that everyone, from engineers to marketers to leadership, can understand and apply. Real governance means creating a common language, giving teams clarity without stifling creativity, and turning principles into everyday practice. The goal isn’t to draw hard limits—it’s to build the trust and alignment that allow innovation to move fast and responsibly. 🤝


So You Have an AI Policy. Now What? | Part 3 | Briefly

In this segment with Briefly, we move past drafting and focus on the harder part—bringing an AI policy to life. Too often, policies exist only on paper, disconnected from the people making day-to-day decisions. Here we unpack how to close that gap: educating teams, reinforcing accountability, and weaving governance into daily workflows so it becomes second nature. My core message is simple—the real value of an AI policy isn’t in the words on the page, but in the way it shapes behavior and builds trust across the organization.


Tulane Law Lecturer | Privacy | New Orleans | September 2022

In September 2022, I returned to Tulane Law School as a guest lecturer for Amy Gajda's privacy law concentration. She held the Class of 1937 Professorship, and her course gives students a practical grounding in privacy that goes beyond theory.

What I talked about that day was how my career—from financial services through venture capital, mobile dating, and enterprise applications—kept circling back to the same questions. How do you build technology that people can actually trust? What does privacy mean when the systems get more complex but the people using them stay the same?

Those aren't abstract questions for me. In mobile dating, you're handling some of the most sensitive data people share. In enterprise software, you're building tools that touch millions of users who never consented to be in your system. Each shift taught me something about where privacy protections break down and what it takes to build them in from the start.

Teaching at Tulane crystallized something I'd been learning across those different contexts: privacy isn't just a compliance checkbox. It's about understanding how systems actually work, what can go wrong, and what you need to build to prevent that. First principles thinking means asking "why does this rule exist?" before you decide how to apply it.

That's what I bring to teams building AI systems now. Not just knowledge of what the regulations say, but an understanding of why they exist and how to translate that into architecture that protects people without grinding development to a halt.