Ken Priore

I help organizations navigate the messy, fascinating intersection where law meets cutting-edge technology — making sure innovation doesn't just move fast, but moves thoughtfully.

I work with teams building AI systems and new platforms, helping them do it legally, ethically, and sustainably. Think translator, strategist, and early warning system rolled into one.

The work sits at the crossroads of law, technology, and business strategy. I'm not just interpreting today's rules—I'm helping teams anticipate what they'll need to consider tomorrow. It's about seeing around corners and turning complex legal frameworks into practical guidance that actually helps people build better products.

What drives me is this: you don't have to choose between innovative and responsible. The best solutions happen when you bring legal thinking in early—not as a roadblock, but as part of the design process. Whether that's embedding privacy into product architecture, structuring partnerships for responsible scaling, or developing AI governance frameworks that teams actually want to follow.

Beyond the Bio

I believe that excellence and humanity aren't competing values — they reinforce each other. Setting high standards matters, but so does creating the conditions where people can actually meet them. That means building trust, recognizing that good ideas can come from anywhere, and making sure teams feel supported when they're taking smart risks.

At its core, this work is about bringing different perspectives together and finding solutions that serve both the mission and the people behind it. It's collaborative problem-solving at its best — part strategic thinking, part relationship building, and part figuring out how to make complex things feel simple.

You can reach me at ken@kenpriore.com

In Conversation

Exploring AI, law, and innovation through podcasts, panels, and interviews

“AI is getting very good at acting like a lawyer. It isn’t one.”

Jessica Nguyen asked me on the In-House podcast the question many legal professionals are quietly wrestling with: is AI going to take our jobs?

My view is that AI is not replacing lawyers, yet it is reshaping nearly every role inside the legal function. I put it this way on the episode: “I’m not worried about it taking my job — I’m worried about how it will fundamentally change almost every job in the legal function.”

The conversation with Jessica and Jenny Hamilton (CLO, Exterro) circled around this idea of change. We drew comparisons to the early years of eDiscovery, when the shift from paper binders to searchable databases felt daunting. Those tools didn’t eliminate lawyers — they changed how lawyers worked, saved time, and created new ways to deliver value. The same pattern is now playing out in contracting, risk review, and other workflows with AI.

I emphasized that AI is good at acting like a lawyer but is not one, and this is precisely why the human role matters. As I said, “For folks who want to stay in the same exact spot where they are, that’s going to be really difficult.” Lawyers will need to stay in the loop to review, correct, and redirect outputs. That oversight is what prevents malpractice, protects clients, and ensures that efficiency doesn’t come at the cost of judgment.

The throughline in all of this is that adoption must be tied to real problems. Lawyers and teams have limited time, and new technology needs to show why it matters quickly. Curiosity is a good start, though durability comes from solving specific needs and making people’s work easier, not harder.

AI in legal is not a story of loss. It is a story of adjustment, new skills, and lawyers stepping into the role of translators between powerful tools and human judgment.


Harnessing AI to Elevate — Not Replace — Legal Talent

Thrilled to be back co-hosting the "In-House" podcast with Jessica Nguyen! 🎙️ We had a fantastic chat with DocuSign's legal leaders, Sandy MacDonnell and Krysta Johnson, about the future of the legal department.

We explored some fascinating themes:

The Strategic Evolution of Legal Ops 🚀: Legal ops has moved beyond cost-saving to become a strategic partner that drives revenue and shapes company goals. It’s about running the legal department like a business.

AI as an Efficiency Engine ✨: AI is already delivering value in everyday tasks, freeing up legal pros for higher-value, strategic work. The goal is to augment, not replace, human talent.

Human-Centric Tech Adoption 🤝: Successful tech implementation hinges on solving the right problems and keeping humans in the loop. Involving stakeholders early and managing change is critical for adoption.

My take? The lines between "legal ops" and "legal" are dissolving. As I noted in our chat, operational thinking is becoming integral for everyone, powered by technology that empowers us all. Listen to the full episode for more! 🎧


Who Owns the Call? Building Clarity, Speed, and Trust in Legal Decisions

Thrilled to join Jessica Nguyen on the In-House podcast for a deep dive into “The Art of Decision Ownership.” 🎙️ We explored how legal leaders can shift from the dreaded review-and-approve mode to becoming true trusted advisors by building shared ownership of risk, embedding legal early in strategic conversations, and creating a culture of psychological safety. I spoke about how, in organizations of scale, it’s critical to get early agreement on who’s the directly responsible decision maker — without it, you risk decision paralysis. I also emphasized that it’s not about legal saying no; it’s about sharing concerns, inviting dialogue, and getting teams to think differently about the problem.

My take throughout the discussion was that decision ownership is as much about process and relationships as it is about the decisions themselves. Clarity on ownership must come early, before hard calls arise. Legal should be embedded upstream in strategy, not just brought in for final approvals. Framing issues collaboratively — focusing on “how do we solve this together” rather than “legal says no” — is essential, as is documenting the thinking behind major decisions to protect the company in regulatory conversations. Finally, fostering psychological safety ensures teams feel empowered to take informed risks and view mistakes as opportunities to learn, not moments for blame.


Structuring AI Governance for Real-World Impact

In this Practicing Law Institute program segment, I walked through the practical building blocks of an effective AI governance framework—translating abstract policy goals into actionable structures. I covered how to define an AI-specific mission, vision, and strategic plan; set measurable KPIs; and establish governing committees that blend C-suite leadership, legal, data science, IT, marketing, and other interdisciplinary voices. I emphasized the value of formal policies and guidance backed by clear internal controls, including approved-use lists for AI technologies, applications, data sets, and vendors. We explored policies governing public AI use, vendor oversight, and organizational messaging, as well as the role of checklists and approval workflows to keep governance embedded in daily operations. My approach underscored that strong governance isn’t just about risk management—it’s about enabling AI innovation within safe, transparent, and strategically aligned guardrails.


Developing an AI Use Strategy & Policy | Part 1

I kick off this series by digging into why AI policies matter now—and how to move from lofty principles to real guardrails teams can actually use. We explore the risks companies tend to overlook, the building blocks of an effective policy, and the cross-functional muscle it takes to make governance stick. My takeaway? Good AI governance isn’t about slowing things down—it’s about creating the conditions where innovation and responsibility can move forward together. 🚀


AI Use Policies: Involving the Whole Business | Part 2

For the second installment, I focus on one of the toughest challenges: making governance real across the entire organization. Policy can’t live in siloed documents or sit solely with legal—it has to be a shared framework that everyone, from engineers to marketers to leadership, can understand and apply. Real governance means creating a common language, giving teams clarity without stifling creativity, and turning principles into everyday practice. The goal isn’t to draw hard limits—it’s to build the trust and alignment that allow innovation to move fast and responsibly. 🤝


So You Have an AI Policy. Now What? | Part 3 | Briefly

In this segment with Briefly, we move past drafting and focus on the harder part—bringing an AI policy to life. Too often, policies exist only on paper, disconnected from the people making day-to-day decisions. Here we unpack how to close that gap: educating teams, reinforcing accountability, and weaving governance into daily workflows so it becomes second nature. My core message is simple—the real value of an AI policy isn’t in the words on the page, but in the way it shapes behavior and builds trust across the organization.