LLMs Aren’t Mirrors — They’re Holograms

There’s a compelling piece in Psychology Today making the rounds that legal, product, and AI teams really need to sit with. It argues that Large Language Models (LLMs) like ChatGPT or Claude aren’t mirrors reflecting society. They’re holograms — rich, context-sensitive projections that feel real, but aren’t.

At first glance, that might sound like wordplay. But it’s a critical distinction.

A mirror passively reflects. It shows you what is. A hologram? It creates the illusion of something coherent and whole — even when the underlying structure is fragmented or incomplete. And that’s what LLMs are doing. They don’t reflect back a neutral or fixed reality. They generate plausible realities — shaped by prompts, patterns, and probabilities — that feel real enough to act on.

That’s both powerful and risky.

In the legal and compliance context, treating AI like a mirror leads us to over-trust outputs. We think: “It’s just reflecting the training data.” But that’s not how these models work. They’re remixing, recontextualizing, and persuading. When they generate legal-sounding advice, psychological support, or even policy commentary, it feels coherent — even authoritative. But coherence isn’t accuracy. And fluency isn’t truth.

That’s where the real danger lies. Not in what LLMs “know,” but in how convincingly they present their projections.

If we approach LLMs as holograms, not mirrors, our frameworks must evolve. That means:

🔹 Layered validation systems

🔹 More nuanced UX design to signal uncertainty

🔹 Policy structures that manage perception risk, not just output risk

🔹 Cross-functional teams that understand both the technology and the human cognitive biases it plays to
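To make the first two bullets concrete, here is a minimal sketch of what "layered validation" plus "UX that signals uncertainty" could look like in code. Everything here is hypothetical — the validator names, the `ValidatedOutput` type, and the checks themselves are illustrative placeholders, not a real product's API:

```python
from dataclasses import dataclass


@dataclass
class ValidatedOutput:
    text: str
    checks_passed: list   # which validation layers approved the draft
    uncertain: bool       # surfaced to the UX layer, never hidden


def validate_layers(draft: str, validators) -> ValidatedOutput:
    """Run a draft LLM answer through independent validation layers.

    Each validator is a (name, check) pair returning True/False.
    The output is only marked confident when every layer passes;
    otherwise the UI should flag the text rather than present it
    as settled.
    """
    results = [(name, check(draft)) for name, check in validators]
    passed = [name for name, ok in results if ok]
    return ValidatedOutput(
        text=draft,
        checks_passed=passed,
        uncertain=len(passed) < len(results),
    )


# Illustrative stand-ins for real layers (citation checks,
# policy review, human sign-off):
validators = [
    ("has_citation", lambda t: "[source]" in t),
    ("no_absolute_claims", lambda t: "guaranteed" not in t.lower()),
]

out = validate_layers("This is guaranteed legal advice.", validators)
# out.uncertain is True: the draft failed both layers, so the
# interface should signal doubt instead of projecting authority.
```

The design point is the last bullet in miniature: the model's fluent output and the system's confidence in it are kept as separate signals, so the hologram never gets mistaken for the doorway.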

Holograms can be beautiful, useful, even transformative. But you wouldn’t step through one thinking it’s a doorway.

Let’s build legal and AI systems that reflect that reality.

Full article: https://www.psychologytoday.com/us/blog/the-digital-self/202505/llms-arent-mirrors-theyre-holograms

Comment, connect, and follow for more commentary on product counseling and emerging technologies. 👇