232. From Typewriters to Transformers: AI is Just the Next Tools Abstraction
AI is a tool-driven revolution. That’s why it unnerves people. Freeman Dyson said in 1993, “Scientific revolutions are more often driven by new tools than by new concepts.” That’s AI.
For those deep in tech, AI is clearly a new paradigm—a sweeping rethinking of software. Most paradigm shifts arrive long before the tools that embody them. But with AI, we fast-forwarded: from paradigm to tools in under a decade, now used by hundreds of millions. That speed doesn’t soften the typical reaction to new tools—fear, skepticism, even rejection. We’ve seen this with every major shift in computing. This post shares a few examples of that resistance in action.

On Bari Weiss’ Honestly podcast, a recent debate tackled: Will the Truth Survive Artificial Intelligence?
The “yes” side featured Aravind Srinivas (Perplexity) and Fei-Fei Li (Stanford); the “no” side, Jaron Lanier (Microsoft) and Nicholas Carr. I won’t spoil the debate, but a major theme was concern about learning, writing, and education.
AI is a tool. Tools abstract and automate tasks. Each new one adds abstraction and automation over what came before. That’s rarely controversial—until it touches something people are emotionally or professionally invested in.
Case in point: teachers once opposed typewriters in class. They worried students would forget how to write. They were right—by college, I could barely write cursive. But typed papers were faster to produce and easier to grade. Teachers opposed calculators for fear students would forget how to add. Yet today’s engineers skip the slide rule and get vastly more done with libraries of ready-made routines.
My freshman year (1983) was a turning point. Two quick stories:
First: Most students arrived with typewriters—graduation gifts. You were expected to turn in typed papers. Rich kids had “fancy” models that let you backspace before a line was committed to paper.
Meanwhile, the university had a few WANG word processors—business-grade machines—available to select writing sections. Faculty were worried: if students didn’t handwrite first drafts, would they learn to write at all? That exact fear came up in the podcast too.
So we ran an experiment. Most students used pen and paper, then typed. A few of us used WANG machines for everything. Faculty planned to compare the results.
Then came January 1984. Macintosh launched. Apple pushed them onto campuses. What the faculty hoped to study was rendered moot overnight. The tool leapfrogged the debate—just like GPT did two years ago.
The real issue wasn’t just speed. It was abstraction. Word processors offloaded spell check, formatting, and editing—freeing us to focus on content. Educators were already complaining about poor spelling long before spell checkers showed up.
Second: I was a computer science major. CS was a new discipline then—separated from electrical engineering in the 1960s. Its foundation? Abstraction: you didn’t need to solder circuits to build software.
This wasn’t universally accepted. Many schools kept CS under engineering, requiring EE and physics. That meant you still learned transistors to write code. My school dropped that. We were among the first CS majors who didn’t take physics or EE—and some argued we’d never truly understand computers.
They were wrong. That too was an abstraction.
AI is the next abstraction layer.
And like all previous abstractions, it’s criticized on two fronts:
Loss of fundamentals: New users won’t understand what came before. That’s true. But also true: they can do far more than previous generations. Abstraction is about not needing the old tricks. No one misses manually hyphenating or footnoting on a typewriter.
Lack of understanding: Critics say people won’t know how their AI-generated results were made. That’s a weak argument. When a carpenter uses a nail gun, do we say they no longer understand roofing? I know what my computer is doing even if I’m not flipping bits manually.
So why the negativity around AI in learning? Is it just a replay of new technology in schools?
It’s not unique or new. People say students will get lazy, not “really” understand, or miss what “matters.” The same was said about word processors. And Macs. And dropping EE courses.
Growing up, we were drilled on how to find things in the library—but never allowed to use the encyclopedia. That was “cheating.” Odd, since many families invested in full encyclopedia sets.
Then I discovered the almanac. Game over. I won every classroom research contest using that book. We bought a new edition every year. It felt obvious. That instinct is why my dad bought an early PC. My first thought going online? “Now I’ve got a real-time almanac—at 300 baud.”
What we hear today about AI—worries about truth, AGI, or education—isn’t really about those things. They’re dressed-up ways of resisting change.
Writing and learning with AI is the typewriter, the word processor, the encyclopedia, and the almanac rolled into one. Seeing it as something scarier than that is just fear of new tools and new paradigms. Again. It is no surprise that we’re seeing so much writing about these concerns—writers are the ones directly challenged, just as electrical engineers were challenged when software abstracted away the hardware.
Some will say not all abstraction layers are created equal. Not all tools are “harmless”. And if they believe that about AI, they will conclude that AI needs more scrutiny, sooner, and that we should slow down until we understand it. The challenge is that the future doesn’t wait around for everyone to reach a consensus. It arrives with new tools in hand. That’s what happened in 1984 when the Macintosh arrived. That’s what is happening with AI.
AI is here. It’s already happening.
At the risk of sounding like I come from the resistance camp, let me start by saying that I think this is spot on. That said, playing devil’s advocate, I think there is a key difference that the typewriter-to-transformer analogy glosses over: previous tools automated manual processes, while AI potentially automates thinking itself. When word processors handled formatting, we still did the reasoning and writing. If AI handles both research and reasoning, are we abstracting away the very cognitive processes that define learning? At what point does abstraction become intellectual atrophy?
The big difference between AI and the typewriter is that the typewriter is reliable while AI is not. And we already suffered from leaky abstractions in software before AI stupidity came along.
This combination means that soon we will have lots of code and lots of systems that can’t be fixed even in principle, because no one knows how they work (the AI doesn’t really know anything, the people who “wrote” the code have no idea what they “wrote”, and the prompts used to create that slop are lost, because of course no one saves them).
I guess in 5-10 years, maybe earlier, we will see the whole house of cards start collapsing. But precisely because of the speed of this new “revolution” there is hope: when the collapse of that AI-infested slop happens, there will be no one left who can FIX it, but there will be plenty of people able to REBUILD things in a somewhat working, reliable way.
The only question is whether we go the Dune route, with AI outlawed… or whether someone finds a non-destructive way to use it.