When good AI policies fail: why employee resistance beats perfect frameworks

Product counsel has focused on policy design, but Fast Company research reveals that 31% of employees admit to sabotaging their company's AI strategy. The real challenge is navigating identity crises, not drafting better frameworks.


I've spent months arguing that proactive AI policies work when everyone is on the same page and pulling in the same direction. The Fast Company research makes me rethink whether I've been solving the wrong problem. Per the study, 31% of employees admit to "sabotaging" their company's AI strategy by refusing to adopt AI tools, and the real resistance stems not from the tools themselves but from uncertainty over who they are and who they'll become.

Product counsel typically focuses on getting policy language right: clear boundaries, who to contact when things go wrong, and approaches to managing risk without stifling innovation. What I'm seeing is that resistance operates at the level of identity, not compliance. The research describes a "competence penalty"—peers judge AI users as less competent, with women and older workers disproportionately affected. So your carefully built policy becomes irrelevant when employees fear that using AI makes them look incompetent.

This means lawyers need to think more like organizational psychologists. The same research found that 39% of Gen Z workers automated tasks without manager approval, 47% used AI inappropriately, and 63% saw others do the same. When policies exist but adoption fails, the problem isn't how we structure those policies—it's that we designed for people making logical choices in a situation that's fundamentally emotional. Product counsel needs to move beyond policy design and become a partner in helping organizations navigate the identity shifts that AI adoption actually requires.

https://www.fastcompany.com/91382852/why-your-employees-are-actively-resisting-ai-in-the-workplace-and-what-to-do-about-it