13 Comments
Bocar Dia

At the risk of sounding like I come from the resistance camp, let me start by saying that I think this is spot on. That said, if I play devil's advocate, I think there is a key difference that the typewriter-to-transformer analogy glosses over: previous tools automated manual processes, while AI potentially automates thinking itself. When word processors handled formatting, we still did the reasoning and writing. If AI handles both the research and the reasoning, are we abstracting away the very cognitive processes that define learning? At what point does abstraction become intellectual atrophy?

PWho
Jun 3 (edited)

Not just manual processes, but supporting processes. Grinding ink, making corrections, writing the letters in the proper shape, that was all mechanical work necessary to carry the message. Automation helped you transcribe your message, and in the process also freed you to devote more attention to the message. If the LLM writes the essay or letter, you have offloaded the essential element, not just the scaffolding around it.

It's interesting that this conversation is so different from earlier ones, where humans were once struggling to teach computers a mechanical task (such as language translation) that LLMs are now incredibly good at. Few are concerned about those developments.

khimru

It's not clear yet how LLMs will affect work, but so far they have managed to nuke the entire education process. Essays were never the goal of education. Not even the ability to write essays was the goal! The goal is the ability to do some honest work (which may then serve as the basis for an essay)… that's where the infamous years-long “and here we will make the transformer core out of wood because no one will read to this point anyway” joke comes from.

And educators understand that perfectly: I know a friend who worked with me on a diploma project and ultimately wrote the text using ChatGPT 3.5 (a very early version), with a pile of nonsense in it. He still got good marks, because the teachers saw what he had actually built (something that would be used by millions of people) and were sure it was good work.

But for most students the essay is the goal: write the essay, get the diploma… and then what? The “weight” of a diploma had already been declining in recent years, and now, with LLMs, it has become an entirely pointless piece of paper.

It's as if someone saw a factory dumping a lot of waste into the river and built a “better” one that produces 10 or 100 times more waste… except the goal of the factory was to produce bicycles, and the “improved” version doesn't make them at all, just piles of waste.

khimru

The big difference between AI and the typewriter is that the typewriter is reliable while AI is not. And we already suffered from leaky abstractions in software before AI stupidity entered the picture.

This combination simply means that we will soon have lots of code, and lots of other things, that cannot be fixed even in principle, because there is no one left who knows how they work (the AI doesn't really know anything, the people who “wrote” that code have no idea what they “wrote”, and the prompts used to create that slop are lost, because of course no one saves them).

I guess in 5-10 years, maybe earlier, we will see the whole house of cards start to collapse, but precisely because of the speed of this new “revolution” there is hope: when the collapse of that AI-infested slop happens, there will be no one able to FIX it, but there will be plenty of people able to REBUILD things in a somewhat working, reliable way.

The only question is whether we go the Dune route, with AI outlawed… or whether, perhaps, someone finds a non-destructive way to use it.

Ralph Case

The statement is correct that, to be useful, AI needs to be reliable. Today it sometimes is, but often it is not. We have more to learn about which tasks can be reliably automated with these "AI" models.

But it's not correct that things can't be fixed in principle because no one knows how they work. When CS students wrote code without any understanding of how transistors work, that didn't keep them from writing good code and understanding what it was doing.

When the tools are reliable at providing a new abstraction, we don't need to understand all the abstractions at once. That's the power of our coding tools and environments.

We shouldn't judge "AI" tools by the code they create; we should judge them by the inputs to the tools. What abstractions are we working with when we use them?

khimru

“We have more to learn about which tasks can be reliably automated with these ‘AI’ models.”

How about “nothing”? Neural network reliability issues were being discussed half a century ago (yes, I'm not making this up!).

The idea that you can pour billions, trillions, bazillions of dollars and spend all of Earth's resources, as Eric Schmidt suggests, and the problem will magically disappear is absurd.

Indeed, “CS students wrote code without any understanding of how transistors work”… precisely because transistors work so reliably: when a transistor has one failure per 10¹⁹ operations (a typical failure rate for transistors!), one doesn't need to know how the thing works… it's enough to know that it does.

When the failure rate is around one in 100 operations (the typical ~1% “hallucination” rate for LLMs), you couldn't build ANYTHING on top of it: no matter what you build, it would work for a VERY short time before falling apart.
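
To make the arithmetic concrete, here is a minimal sketch (illustrative only; it simply assumes the per-step error rates quoted above and independence between steps):

```python
# Illustrative only: how a fixed per-step error rate compounds across a chain
# of dependent steps, using the figures quoted in this comment as assumptions.
def chain_success_probability(per_step_error: float, steps: int) -> float:
    """Probability that every step in a chain of `steps` operations succeeds."""
    return (1.0 - per_step_error) ** steps

# A transistor-like component (one failure per ~10**19 operations):
print(chain_success_probability(1e-19, 10**9))   # ~1.0 even after a billion ops

# An LLM-like component at ~1% error per step:
print(chain_success_probability(0.01, 100))      # ~0.37
print(chain_success_probability(0.01, 1000))     # ~4e-5
```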

“What abstractions are we working with when we use them?”

LLMs. A failure rate of approximately 1% would have been acceptable if it were RELIABLY 1% (there are ways to build reliable things from components that are consistently unreliable), but that's not the case: all LLMs work more reliably when they process “good inputs” and degenerate into utter nonsense (with 90% or more of the output being nonsense) when pushed… with NO warning to help you understand when that is happening.
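
The parenthetical refers to the classic trick of getting reliable behaviour out of unreliable parts: redundancy with majority voting. A minimal sketch, assuming independent failures at a known rate p (exactly the property the comment says LLMs lack):

```python
# Illustrative only: majority voting over redundant, independently failing
# components. Works when the failure rate is known and failures are independent.
from math import comb

def majority_vote_error(p: float, n: int) -> float:
    """Probability that an n-way majority vote is wrong, given independent
    components that each fail with probability p (n odd)."""
    wrong_majority = n // 2 + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(wrong_majority, n + 1))

print(majority_vote_error(0.01, 3))   # ~3e-4: voting helps a lot
print(majority_vote_error(0.01, 5))   # ~1e-5: and even more with more voters
```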

Ergo: they can only be used in areas where you are an expert and can validate their output… but why would you need help in areas where you are an expert?

You need help in areas where you are struggling… and that is precisely where LLMs are useless.

Jim Weil

I like this concept of automation and abstraction as it applies to AI. However, just because that may be the underlying societal mechanism of AI does not mean AI is no different from the GUI or the typewriter. I don't think you are necessarily saying that, even as you point this out in your post. Nuclear weapons are also automation and abstraction, in the killing of large populations of people.

Agile Chris

This was a fascinating read! It really got me thinking about the recurring nature of technological shifts. I agree that the move to new abstraction layers, like the one we're seeing with AI, isn't fundamentally new; it's a pattern we've observed throughout history, and it's largely unstoppable.

What's crucial for me is how each new tool introduces its own unique threats. For Generative AI, it's the attack on truth. With Agentic AI, moreover, the threat isn't just about automation – we've long automated tasks. It's about AI's capacity for independent reasoning and the subsequent execution of actions without direct human instruction, and about our loss of control.

This brings us to a critical question: what does the loss of control over such a powerful tool, particularly Agentic AI, imply for human survival? The classic paperclip maximiser thought experiment, while seemingly benign at first glance, illustrates how an AI agent with a singular goal and access to resources could spiral into a catastrophic outcome. It is this reasoning component in particular, rather than simply the execution of actions, that warrants our closest attention and caution.

James Sherrett

The more I see AI in use and use it myself, the more I think it's less about computers and more about everything. The best analogy I can think of is oil: compelling and powerful on its own, but even more powerful as it becomes embedded in so many disparate things we don't think of as software. Oil is embedded in food, transportation, city design, agriculture, textiles. AI seems like it could be similarly embedded in everything, with similar second-, third-, and higher-order effects.

Albert Cory

You're right that Thomas Kuhn and his "paradigm-shift" BS are massively overrated and a crutch for scientists with bad theories.

However: what about handwashing for doctors and surgeons? What about the H. pylori theory of stomach ulcers? There are certainly areas of science where contrary opinions are actively suppressed.

Even PCR was strongly resisted by the molecular biology professors of the day, and I saw Kary Mullis in person, so I know that.

Steven Postrel

I am going through this right now with master's students in business school. Since I have always graded written material more critically than most instructors, the advent of GenAI is as much an opportunity to raise standards as it is an opportunity for students to "phone it in."

But... you need to be much clearer about what is being "abstracted" out of the student's workflow, and why that element would supposedly no longer be needed in a world with ubiquitous AI. (Even when calculators replaced slide rules, many instructors noted at the time that students were losing the sense of how big the inputs and outputs of calculations ought to be, so that they would turn in answers off by orders of magnitude. That sense of what ballpark was reasonable was cultivated by hand algorithms, tables, and especially slide rules.)
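
To make the "off by orders of magnitude" point concrete, here is a minimal sketch of the kind of order-of-magnitude sanity check slide rules forced on students (the numbers are made up for the example, not drawn from any actual assignment):

```python
# Illustrative only: a quick order-of-magnitude check on a computed answer.
from math import floor, log10

def magnitude(x: float) -> int:
    """Order of magnitude of x, e.g. 130 -> 2, 13_000 -> 4."""
    return floor(log10(abs(x)))

a, b = 4.2e3, 3.1e-2         # "about 4 thousand times about 3 hundredths"
rough = 4 * 3 * 10**(3 - 2)  # slide-rule style estimate: about 120
exact = a * b                # 130.2

print(magnitude(rough), magnitude(exact))   # 2 2 -> answer is plausible
print(magnitude(13_020), magnitude(exact))  # 4 2 -> a slipped decimal point
```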

Just because Claude can often knock out a pretty good answer to one of my numerical word problems about competitive advantage doesn't mean that the student using Claude could rely on it in the future in similar situations. Nor does it imply that there is nothing to be gained by solving it with a brain and a calculator rather than an AI. The pedagogical point of that sort of word problem, which is far simpler than any real-world case, is for the student to be able to grasp the principle that is numerically embodied in it. A student who gets an AI answer without effort (and without even reading the output) will not thereby be capable of using the pattern exemplified in the problem to think about new business situations. Nor will he be capable of setting up a similar prompt in a new situation for the AI to "solve."

This situation therefore diverges from the examples of typewriter and calculator. It is more like a rich kid hiring a somewhat reliable "tutor" to do his homework and take his tests for him. Even if he stays rich and can keep this person around for the rest of his life, real-world problems are not constructed and assigned by teachers, so he won't be able to succeed in the same way after schooling. It is important that he internalize the processes and skills that his tutor had applied in his stead.

Martin Soler

I was speaking at a conference on AI for hospitality a few weeks ago. Most people were cautious, even skeptical, about AI in hospitality – even AI researchers. A tool gets adopted because it removes time and friction on the way to an end result. AI does that. Will we get lazy? Probably. I got lazy in school; I never learned my times tables. I explained to my teachers that I didn't want to waste my mental bandwidth on things that a tool (a calculator) could do, and would rather focus on real problem solving. Needless to say, they weren't very happy with me. Socially, it is still embarrassing. But intellectually, I'm OK.

We need to teach critical thinking. That's really the main thing we need our kids to learn, because AI answers will be biased, and the bias will be more subtle than black-hat SEO tactics. We will still need to learn to challenge what we see and read. But that was the same with books.

My mini-rant on the same topic: https://martinsoler.substack.com/i/164141631/resistance-and-doubt-the-real-risks-for-ai-in-hospitality

John Clark

I couldn't write legibly as a child, a teen, an adult… ever. I love to write. If teachers and instructors hadn't given in all those years ago, I would never have grown into an author. Instead of short paragraphs of illegible handwriting, I now enjoy typing out long short stories and novels.

I type over 140 wpm, using a stiff mechanical keyboard to slow me down. That is still fast enough to out-type the buffer at times. I *still* have to go back and edit content. The process is different, but the skills are the same.

Speech-to-text is over 40 years old, and not a single program, no matter how well trained, has yet produced an error-free document for me.

Methods change, but the underlying processes are still there.
