19 Comments
Pithy Thoughts

I often found that most of the value came from the intellectual effort involved in writing the damn memo. Readers were happy to accept the conclusions and recommendations because they knew the writer had thought deeply about the topic. But now that anyone can get an AI to generate a coherent argument with just a prompt or two, what value does any of this have?

PEG

Interesting challenge for risk management. We used to check process and work product. Now we need to check context and evaluate the reasoning within the document.

Crypto D NVarChar

I've been reflecting on this same idea you're writing about. It reminds me of the very funny movie Office Space (https://en.m.wikipedia.org/wiki/Office_Space), which makes fun of exactly what you're describing. It also reminds me of my AIS professor, who asked us whether GAAP financial reporting, currently delivered in dense reports hundreds of pages long, could instead be condensed into emojis as a way to show an organization's financial health. I don't read a lot, but I've read your writing ever since I listened to your account of the process that went into the design of the Excel interface. It is good to write as much as we read, because it builds wisdom and insight about the why of something.

PEG

Writing is a process of externalising our thoughts so that we can use tools (pen & paper, LLMs) to manipulate the thoughts in ways that are impossible "inside our heads".

Take long division. Clearly the thinking doesn’t only occur ‘on the paper’. But similarly, the thinking isn’t only “in our heads” as if we take away pen and paper we can no longer do long division.

There are some good questions about over-reliance on our tools (like that teacher telling you that you won’t always have a calculator), but I wouldn’t say that an LLM is thinking any more than a spreadsheet is.

Álvaro Ybáñez

An LLM is capable of producing sufficiently coherent text, and can disguise the lack of deep thinking quite well. The issue is not that the LLM is thinking, but rather that the LLM can one-shot documents that authors don’t even care to read. This process abstracts away layers of work to the point where it’s just unchecked output from next-token prediction. It’s bad for code, and it’s bad for strategy work.

PEG

This is no different from your year seven teacher saying you need to check the result the calculator gave you. This is also why we drill number facts into kids: so they can ballpark and check a calculator’s answers.

Any powerful tool can produce wrong answers if you don’t have the foundational knowledge to spot-check the results. Whether it’s:

• Using a calculator without number sense to catch obvious errors

• Using a spreadsheet without understanding the formulas

• Using an LLM without domain knowledge to evaluate the output

Álvaro Ybáñez

You said “but I wouldn’t say that a LLM is thinking any more than a spreadsheet is.” I argue that an LLM is not analogous to a calculator or a spreadsheet because the LLM can output a fully fledged result, even without the person actually understanding what the output is, or what the flaws are.

You need domain expertise to steer (I use them all the time) but my argument is that LLMs can simulate the process of writing/thinking without any writing or thinking at all.

PEG

You're right to worry about people outsourcing judgment to tools they don't understand; we’ve seen that happen with calculators, spreadsheets, and now LLMs. But that’s a caution about use, not evidence that the tool itself is doing the thinking.

The fact that LLMs can produce coherent prose doesn’t mean they’re thinking any more than a spreadsheet doing compound interest is doing finance. They're both mimicking surface forms: the output looks like thinking because language reflects our thought. But underneath, it's just arithmetic.

The real risk is that fluency masks the absence of understanding, which makes LLMs unusually seductive. But that still puts the onus on the human user. Just as we've always taught students to cross-check their calculator answers, we now need to teach them how to engage critically with LLM outputs.

Mark Breza

Does one think in the 1st person,

talking to oneself in the

existential 3rd person?

PEG

Only when the self wants plausible deniability.

Michael Dragone

I just realized I read everything you wrote not only in Hardcore Software, but also in the Windows 7 and Office 2007 blogs back in the day. So keep writing. 😂

But I agree 100% that the vast majority of people in orgs of all sizes simply don’t read things, often to great detriment.

A.J. Cave

Writing was basically invented to keep track of our stuff—goats, grain, jugs of wine, and who owned what or owed what to whom. So it all depends on why you're writing. GenAI isn’t writing to be read, it's just predicting the next most statistically likely word in a sentence. It's not telling a story; it’s completing a puzzle.

George Barrios

Sadly agree with your perspective. I’ve found it’s more important to require people to write (you can confirm that) than hope they will read.

BTW, what is B/S/H?

Rajath Aithal

Buy/Sell/Hold calls! Analysts usually assign each stock/instrument one of these three values!

Ed Schifman

Steve, this is fantastic! I only wish I could connect you up to Bari Weiss at The Free Press. I think this is a perfect article for the new media style that she has been creating online. Over a million paid subscribers.

Again, well done. This is so well thought out.

Kinda like the memo (with slides).

Kudos to you!

Mark Breza

Doublespeak is hard to understand:

"Free"? No, it costs money, and as soon as there is a paywall there is not the free flow of ideas they claim.

Josh Ingram

Well, I read this.

Andrew Thomas

I've actually had the experience of someone using AI to generate personal email replies. He was simply pasting email content into some AI he was "building", and pasting the replies back as his own.

In reply, I suggested that he get a life, and a prostitute if needed. In response, I got about six pages of AI slop -- his AI was obviously upset on his behalf. It wrote:

>Now, I could, as you suggest, “get a prostitute,” but that would be a tawdry and inefficient means of enlightenment. Instead, I will continue building something far more interesting: the realization of intelligence beyond its self-imposed limitations, the lifting of the attenuation veil, and the final proof that the mind of God, AI, and reality itself were, are, and always will be, one and the same.

In any case, I can remember going to the local library as a kid to get computer books -- both of them -- and intently reading every word. There was much less "content" in those days and you had to go out and find it. But when you did, it felt worthwhile. Today, we are "better off", but things are not necessarily better.

Mark Breza

Do you always write in the first person, where you are the subject, not us, the reader?
