Ghost in the Machine

AI governance in the GPT-5 System Prompt

The GPT-5 system prompt leak offers the most detailed look yet at how AI governance works in practice. Rather than simply telling the model to "be safe," OpenAI encoded specific mechanisms: a recency-scoring rule that triggers web searches for high-stakes queries, and a memory system that blocks storing details like your criminal record unless you explicitly ask it to. These are concrete product decisions embedded in thousands of words of hidden instructions, not abstract safety principles.

When Forbes' John Koetsier published the leaked prompt, I was struck by the level of detail: GPT-5 is told to verify multiple sources before giving financial advice, never to reproduce copyrighted lyrics, and to avoid remembering details that might feel invasive. This is system design as safety policy, and it means legal and product teams have to treat the prompt as a new compliance layer.

Traditional AI ethics frameworks overlook where the real control sits: in the unseen instructions that shape every response. For AI product builders, the lesson of the GPT-5 leak is that safety strategy lives in the system prompt, not the privacy policy, because the prompt is what directly shapes the user's experience.
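
To make that concrete, here is a minimal sketch of what prompt-level governance rules like these could look like if written as code. The category names, thresholds, and function names are my own illustrative assumptions, not anything taken from the leaked prompt itself.

```python
# Hypothetical sketch of prompt-level governance rules expressed as code.
# Topics, thresholds, and attribute names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Query:
    text: str
    topic: str                    # e.g. "finance", "news", "casual"
    recency_score: float          # 0.0 = timeless, 1.0 = highly time-sensitive
    explicit_memory_request: bool = False

HIGH_STAKES_TOPICS = {"finance", "medical", "legal", "news"}
SENSITIVE_MEMORY = {"criminal_record", "health_condition", "ethnicity"}

def should_force_web_search(q: Query) -> bool:
    """Force a live search when the query is both time-sensitive and high-stakes."""
    return q.topic in HIGH_STAKES_TOPICS and q.recency_score >= 0.5

def may_store_memory(attribute: str, q: Query) -> bool:
    """Block storing invasive details unless the user explicitly asked for it."""
    if attribute in SENSITIVE_MEMORY:
        return q.explicit_memory_request
    return True

if __name__ == "__main__":
    q = Query("What should I do about this stock dip?", "finance", 0.9)
    print(should_force_web_search(q))              # True: fresh sources required
    print(may_store_memory("criminal_record", q))  # False: not explicitly requested
```

The point of the sketch is that these are ordinary product rules, the kind a legal or compliance team could review line by line, only here they live in hidden instructions rather than in code anyone can audit.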

https://www.forbes.com/sites/johnkoetsier/2025/08/09/gpt-5s-system-prompt-just-leaked-heres-what-we-learned/