The organizational move that made AI safety a competitive weapon
Anthropic's decision to place red team work under policy rather than engineering reveals how organizational structure can turn safety research into both technical protection and regulatory positioning.
When Anthropic placed its Frontier Red Team under policy instead of engineering, the decision shaped everything that followed. Most AI companies bury their red teams in technical divisions, where the job is finding bugs and patching them. Anthropic took a different path: safety research that gets published, not just problems that get fixed.
Logan Graham's 15-person team launched its own "Red" blog last month, publishing everything from Department of Energy nuclear studies to Claude running a vending machine business. When Keane Lucas took the stage at DEF CON to demonstrate Claude's hacking wins (and its funny failures, like inventing fake "flags" when overwhelmed), the team was doing more than sharing research. It was putting the work on public display.
Jack Clark's calculation is straightforward. Safety work that stays internal might protect your product, but safety work that gets published protects your relationships in Washington. When Graham's team writes up Claude's concerning capabilities, it establishes Anthropic as the company that finds dangerous AI behavior before regulators do.
For legal and product teams at other AI companies, the question isn't whether this approach is authentic or calculated. It's whether putting safety evaluation under policy creates better conditions for the work that needs to happen, and for the relationships you need to maintain while doing it.

