Understanding AI as a "Normal" Technology
Headlines proclaim that AI will change everything overnight—jobs, society, the whole works. The technology is powerful, no question. But some experts want us to slow down and look past the hype.
This article draws on work by Princeton Professor Arvind Narayanan and researcher Sayash Kapoor, whose "AI as Normal Technology" framework reframes AI as a technology that develops through gradual social, institutional, and regulatory adaptation—like electricity or the internet. It also incorporates insights from Stacey Gray at the Future of Privacy Forum on adoption dynamics and cognitive offloading. The goal is to translate their academic and policy work into accessible guidance for practitioners and policymakers.
Princeton Professor Arvind Narayanan argues we should treat AI as a "normal technology." This doesn't diminish its potential. It grounds it in reality. The real-world adoption of AI is slower, messier, and more human-driven than most people think. Understanding that matters for smarter policy.
What "AI" Actually Means
The term "AI" is misleading because it sounds like one thing. It's not. It's a toolbox with different instruments for different jobs.
Not One Technology—A Collection
Narayanan explains that AI is "a collection of loosely related technologies." Treating everything from a movie recommender to a medical diagnostic tool as the same thing obscures how they work and what they do.
Two Types Worth Distinguishing
The AI universe splits into two camps: Predictive AI and Generative AI. Predictive AI has been around for years, quietly informing high-stakes decisions about people, such as who gets hired or approved for a loan. Generative AI is newer, and Narayanan admits it's "very useful to virtually every knowledge worker."
The "Free Buzzsaw" Problem
Narayanan criticizes how generative AI was released to the public. His metaphor: "It's as if everyone in the world has been simultaneously given the equivalent of a free buzzsaw." This captures his view that the release model failed to build in guardrails from the start: universal access won out over targeted, safer applications built for specific industries.
The Four-Stage Journey from Invention to Impact
The path from new invention to society-changing force is long and winding. Narayanan adapts a classic framework from "diffusion of innovations" theory to explain why AI's real impact takes much longer than headlines suggest.
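Classic diffusion-of-innovations theory pictures cumulative adoption as an S-curve: slow at first, accelerating in the middle, saturating at the end. The sketch below plots that stylized logistic curve. It is a generic illustration of diffusion theory, not a model of AI specifically, and the growth rate and midpoint are arbitrary assumptions chosen for the example.

```python
# Stylized S-curve from classic diffusion-of-innovations theory.
# Not a model of AI adoption; the rate and midpoint are arbitrary
# illustrative assumptions.
import math

def adoption_share(year, midpoint=10, rate=0.6):
    """Logistic cumulative adoption share, `year` years after invention."""
    return 1 / (1 + math.exp(-rate * (year - midpoint)))

for year in (1, 5, 10, 15, 20):
    print(f"year {year:>2}: {adoption_share(year):5.1%} of potential adopters")
```

Even with these generous parameters, adoption sits in the low single digits for years after the invention stage, which is why the later stages, not the lab breakthroughs, set the real timeline.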
Four Phases of Change
Real-world technological change happens in stages:
- Invention: Models and their capabilities improve. This is the stage of technical breakthroughs, where AI models get better at passing benchmarks and performing tasks in a lab.
- Innovation: Capabilities translate into actual products. A company takes a powerful AI model and builds it into something useful, like an AI-powered image editor or a legal research assistant.
- Adoption: Early users experiment with new products. Individuals and organizations try out these tools, figuring out what they're good for and where they fall short.
- Adaptation: Organizations, laws, and business models structurally change to use the technology. This is the final stage, where society rearranges to accommodate and benefit from the new technology.
Adaptation Is the Bottleneck
Of these four stages, Adaptation is by far the slowest and hardest. It requires more than good technology. It demands changes in human behavior, legal structures, and economic incentives.
Take corporate reluctance to deploy customer service chatbots:
- The Problem: Companies fear liability if a chatbot provides incorrect or harmful information. Air Canada learned this the hard way.
- The Solution: New business models are emerging. Startups now offer "insurance for the errors that generative AI products might make."
Real-world use depends on an entire ecosystem of supporting structures—insurance, regulations, new professional norms. These take time to build.
"Rapid Adoption" Doesn't Mean What You Think
A common claim is that AI is being adopted faster than any technology in history. Look closer at the data, the historical context, and the nature of the technology, and the picture gets more complicated.
Quantity vs. Quality
One study claimed 40% of US adults use generative AI. Sounds big. Narayanan points out the flaw: this includes trivial use, like someone using ChatGPT once a week to write a limerick.
The metric that matters is "intensity of adoption." When researchers looked at how many work hours people used generative AI for, the contribution to labor productivity was "very, very small." True adoption isn't about how many people tried a tool. It's about how deeply it's integrated into essential daily tasks.
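To make the distinction concrete, here is a minimal sketch contrasting a headline adoption rate with an intensity-of-adoption measure. The respondents, hours, and field layout are hypothetical, invented purely for illustration; they are not drawn from the study Narayanan discusses.

```python
# Illustrative sketch: headline "adoption rate" vs. intensity of adoption.
# All numbers are hypothetical and chosen only to show the contrast.

respondents = [
    # (used_genai_past_week, weekly_work_hours, hours_assisted_by_genai)
    (True, 40, 0.5),   # tried it once, e.g. to write a limerick
    (True, 40, 12.0),  # heavy, integrated daily use
    (False, 40, 0.0),
    (True, 35, 1.0),
    (False, 45, 0.0),
]

# Headline metric: share of people who used the tool at all.
adoption_rate = sum(1 for used, _, _ in respondents if used) / len(respondents)

# Intensity metric: share of total work hours actually assisted by the tool.
total_hours = sum(hours for _, hours, _ in respondents)
assisted_hours = sum(assisted for _, _, assisted in respondents)
intensity = assisted_hours / total_hours

print(f"Headline adoption rate: {adoption_rate:.0%}")  # looks large
print(f"Intensity of adoption:  {intensity:.1%}")      # much smaller
```

Everyone who touched the tool counts toward the headline number; only the hours the tool actually assists move the intensity figure. That gap is the difference between "tried it" and "relies on it."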
History Provides Perspective
The claim that today's adoption is unprecedented doesn't hold up well.
- Electricity: When factories first got electricity, it took years for them to become more efficient. They couldn't just replace a steam engine with an electric motor. They had to completely redesign the factory floor to take advantage of the new power source.
- Radio: A century ago, radios went from "nearly 0% of households to almost 100% of American households in something like 8 years." That's fast diffusion by any standard.
- Smartphones: While smartphone adoption felt fast to those who lived through it, Narayanan is "not at all convinced" it was faster than previous technological waves.
A Different View: This Time Might Be Different
Stacey Gray at the Future of Privacy Forum argues generative AI is different. The comparisons to electricity or radio don't fully work because of three factors:
- Zero Economic Cost: There's often no individual economic cost to start using advanced generative AI. That removes a major barrier.
- Few Institutional Barriers: A lawyer, judge, or teacher can begin using these tools for professional tasks without waiting for organizational approval or system-wide changes.
- Cognitive Offloading: The ability to offload mental work is "addictive," driving adoption in ways previous technologies did not.
The Real Costs Aren't Monetary
Narayanan's response: this view overlooks the costs slowing professional adoption. The real barriers aren't monetary:
- Reputational Risk: The consequences of being caught using AI irresponsibly are severe. Lawyers have been sanctioned by courts for submitting AI-generated legal filings with fabricated case citations.
- Legal Liability: Air Canada was held liable for its chatbot's mistakes after the bot gave a customer incorrect information about the airline's bereavement-fare policy.
- The Correction Process: Many sectors saw an initial wave of over-excitement followed by a "correction process" as real-world risks became clear. The result is more cautious, slower adoption.
Why Misuse Spreads Faster
If productive, economic adoption is slow, why does it feel like AI is everywhere? Misuses of the technology get adopted much faster than productive uses.
Students using AI to cheat on homework is a prime example. The reason is simple: bad actors aren't constrained by the things that slow down responsible adoption—"responsible AI and compliance and... regulation."
This disparity is a direct consequence of the "free buzzsaw" model. The tool was distributed without necessary safety guards, making it far easier to misuse than to use responsibly.
Smarter Regulation for a Normal Technology
If AI follows a predictable (if slow) path of diffusion, our approach to governing it should be normal too. That means moving away from panic and focusing on what works.
Focus on Application, Not the Model
Trying to regulate underlying AI models during the invention stage is a "lost cause." Narayanan argues regulation "can and should focus on those steps of adoption and adaptation" because that's where the real "choke points are."
You don't regulate electricity at the power plant. You regulate its application in the home with building codes and safety standards. AI governance should focus on how tools are used in specific contexts, like medicine, finance, or law.
Regulation Can Enable Innovation
Contrary to the belief that regulation always stifles innovation, smart rules can be a "win-win" that helps industry. Clear rules reduce "uncertainty [and] risk for companies."
When businesses know the rules, they can "more confidently diffuse AI" into their products without fearing unpredictable legal or reputational damage.
Better Metrics, Better Questions
Much of the current discussion focuses on technical benchmarks and the abstract goal of "Artificial General Intelligence" (AGI). This focus is often a distraction.
Vast sums flow into what Narayanan calls "silly capability benchmarks"—like whether models can "escape from containment"—while funding for practical measurement of real-world use is scarce.
Goodhart's Law warns that when a measure becomes a target, it ceases to be a good measure. We should ask better questions and demand better data:
- Adoption Statistics: High-quality measurement of how many people use the technology and the nature of that use.
- Uplift Studies: Randomized trials to see how a tool helps (or hinders) a person in completing a specific task. Real-world evidence of impact; a minimal sketch of this kind of comparison appears after this list.
- Transparency Reports: Generative AI companies should regularly report on how their models are being used and misused, similar to transparency standards that became the norm for social media platforms.
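As a rough illustration of what an uplift study measures, here is a minimal sketch of the core comparison: random assignment, then a difference in mean outcomes between people working with the tool and people working without it. The task, the simulated times, and the eight-minute effect are assumptions made up for the example; a real study would measure outcomes rather than simulate them, use larger samples, and apply a proper significance test.

```python
# Minimal sketch of an uplift study: randomize who gets the AI tool,
# then compare mean outcomes. All values are simulated assumptions.
import random
import statistics

random.seed(0)

# 20 hypothetical participants, randomly split into treatment and control.
participants = list(range(20))
random.shuffle(participants)
with_tool, without_tool = participants[:10], participants[10:]

def task_minutes(uses_tool):
    """Stand-in for a measured outcome, e.g. minutes to finish a set task."""
    base = random.gauss(60, 10)            # hypothetical baseline around 60 minutes
    return base - (8 if uses_tool else 0)  # assumed, not measured, 8-minute benefit

treatment = [task_minutes(True) for _ in with_tool]
control = [task_minutes(False) for _ in without_tool]

uplift = statistics.mean(control) - statistics.mean(treatment)
print(f"Estimated uplift: {uplift:.1f} minutes saved per task")
```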
From Magic to Method
AI is not a magical force remaking our world in an instant. It's a "normal technology"—powerful, yes, but subject to the same real-world friction, economic realities, and human processes as electricity, radio, or the internet.
Its true impact will be shaped not by the speed of technical invention, but by the slow, messy process of human and institutional adaptation.
Our focus should shift from worrying about "catastrophic AI risks" to the more immediate work of understanding the "subtle ways in which this technology might chip away at" or "help shore up" the foundations of our society.
By treating AI as normal, we can move beyond the hype and begin the work of shaping its journey.


