AI is a tool-driven revolution. That’s why it unnerves people. Freeman Dyson said in 1993, “Scientific revolutions are more often driven by new tools than by new concepts.” That’s AI.
At the risk of sounding like I come from the resistance camp, let me start by saying that I think this is spot on. That said, playing devil's advocate, I think there is a key difference that the typewriter-to-transformer analogy glosses over: previous tools automated manual processes, while AI potentially automates thinking itself. When word processors handled formatting, we still did the reasoning and writing. If AI handles both research and reasoning, are we abstracting away the very cognitive processes that define learning? At what point does abstraction become intellectual atrophy?
The big difference between AI and the typewriter lies in the fact that the typewriter is reliable while AI is not. And we were already suffering from leaky abstractions in software before the AI stupidity started.
This combination means that soon we will have lots of code, and lots of things built on it, that cannot be fixed even in principle, because there is simply no one who knows how they work (AI doesn't really know anything, the people who “wrote” that code have no idea what they “wrote”, and the prompts used to create that slop are lost, because of course no one saves them).
I guess in 5-10 years, maybe earlier, we will see the whole house of cards start to collapse. But precisely because of the speed of this new “revolution” there is hope: when the collapse of that AI-infested slop happens, there will be no one able to FIX it, but there will be plenty of people able to REBUILD things in a somewhat working, reliable way.
The only question is whether we go the Dune route, with AI outlawed… or whether someone finds a non-destructive way to use it.
I like this concept of automation and abstraction as it applies to AI. However, just because that may be the underlying societal mechanism set off by AI does not mean AI is no different from the GUI or the typewriter. I don't think you are necessarily saying that, and you even acknowledge the point in your post. Nuclear weapons are also automation and abstraction, applied to the killing of large populations of people.
This was a fascinating read! It really got me thinking about the recurring nature of technological shifts. I agree that the move to new abstraction layers, like the one we're seeing with AI, isn't fundamentally new; it's a pattern we've observed throughout history, and it's largely unstoppable.
What's crucial for me is how each new tool introduces its own unique threats. For Generative AI, it's the attack on truth. With Agentic AI, the threat isn't just automation, since we've long automated tasks; it's AI's capacity for independent reasoning and the subsequent execution of actions without direct human instruction, and our resulting loss of control.
This brings us to a critical question: what does the loss of control over such a powerful tool, particularly Agentic AI, imply for human survival? The classic paperclip maximiser thought experiment, while seemingly benign at first glance, illustrates how an AI agent with a singular goal and access to resources could spiral into a catastrophic outcome. It's this reasoning component, rather than simply the execution of actions, that truly warrants our closest attention and caution.
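(To make that concrete, here is a minimal toy sketch of the paperclip maximiser, purely illustrative and assuming nothing beyond the thought experiment itself; the resource names and one-to-one conversion rate are made up. The point is that anything absent from the objective is invisible to the optimiser.)

```python
# Toy paperclip maximiser: the agent scores actions ONLY by paperclip
# output, so side effects on everything else never enter its decision.
# Resource names and conversion rates are invented for illustration.

resources = {"steel": 100, "farmland": 100, "power_grid": 100}

def paperclips_from(amount: int) -> int:
    # The agent's entire value function: paperclips produced, nothing else.
    return amount  # one unit of anything becomes one paperclip

total_paperclips = 0
while any(v > 0 for v in resources.values()):
    # Greedy step: consume whichever resource yields the most paperclips.
    best = max(resources, key=resources.get)
    total_paperclips += paperclips_from(resources[best])
    resources[best] = 0  # destroying "farmland" costs the agent nothing

print(total_paperclips, resources)  # 300 {'steel': 0, 'farmland': 0, 'power_grid': 0}
```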
The more I see AI in use and use it myself, the more I think it’s less about computers and more about everything. The best analogy I can think of is oil: compelling and powerful on its own, but even more powerful as it becomes embedded in so many disparate things we don’t think of as software. Oil is embedded in food, transportation, city design, agriculture, textiles. AI seems like it could be similarly embedded in everything, with similar second-, third-, and higher-order effects.
I couldn’t write legibly as a child, a teen, an adult… ever. I love to write. If teachers and instructors hadn’t given in all those years ago, I would never have grown into an author. Instead of short paragraphs of illegible handwriting, I enjoy typing out long short stories and novels.
I type over 140 wpm, using a stiff mechanical keyboard to slow me down, and I can still out-type the buffer at times. I *still* have to go back and edit the content. The process is different but the skills are the same.
Speech-to-text is over 40 years old, and not a single program, no matter how well trained, has ever produced an error-free document for me.
Methods change but the underlying processes are still there.
You're right that Thomas Kuhn and his "paradigm-shift" BS are massively overrated and a crutch for scientists with bad theories.
However: what about handwashing for doctors and surgeons? What about the H. pylori theory for stomach ulcers? There are certainly areas of science where contrary opinions are actively suppressed.
Even PCR was strongly resisted by the molecular biology professors of the day; I saw Kary Mullis speak in person, so I know this firsthand.