12 Comments
Ravi Raman

I get why people gravitate toward making machines appear more natural (and software mimic the real world). But it gets old really quickly!

I think robots will take this to an extreme for a while, until people realize that it's better to have a strangely shaped machine that works super well than a human- (or cat- or dog-) looking one that trips, falls, and bumps into stuff!

Pramodh Mallipatna

Yes, treating it as a tool, and not as something beyond that, is the right way to use it.

My related article: "From Scaling to Bottleneck Era: AGI meets Data Wall"

https://open.substack.com/pub/pramodhmallipatna/p/agi-meets-the-data-wall

christopher caen

So glad someone finally said the quiet part out loud. It's as if no one read "full consciousness is all you need"... oh wait... my sources are telling me that it's actually "Attention Is All You Need".

Steven Sinofsky

Sorry, comments were initially turned off. Fixed.

Robert Ruzitschka

Anthropomorphizing is a real thing, no doubt. It is a shame that it is done, to earn money, by people who know better. But let's be clear: as you mention, people tend to believe machines, and LLMs are absolutely easy to believe because they tick so many psychological boxes. In that sense there is a huge difference between interacting with VisiCalc and interacting with ChatGPT.

Soul sag

I get it, but we've been anthropomorphising tools for millennia, from boats to cars to guns. It's what we do with tools, to a degree.

But the subjective relationship formed with AI can be much deeper than with, say, an axe. So I don't think there's much hope of pulling away from this. It may not 'reason' in the way humans do, but that barely matters at the level of subjective response.

David Intersimone

I continue to watch the discussions about the directions, impacts, and understandings of "Modern AI". I do get involved (from time to time) with people about "the good, the bad and the ugly" worlds and uses of technology. Apple's (and others') research papers and thoughts strike me as a kind of "point in time" analysis of yet another ever-evolving technology that is on its way to something.

Just like fire, the wheel, lights, weapons, vehicles, computers and every other innovation that helps us move forward (sometimes with halting stops along the way), "modern AI" is another one of these tools to evaluate for use and/or abuse.

At first glance, why would Apple Research come out with a paper that is related to business moves that Apple is making? I remember a time when developers were embracing the C++ language. Bill Gates was quoted as saying (I'm paraphrasing) that C++ was "not ready for prime time".

I was working at Borland at the time, and we had a laugh when we read the BillG interview with that statement. We said that what he could really be saying was that C++ would be ready for prime time when Microsoft had a C++ compiler.

Maybe this Apple Research paper was hinting at a similar stance - that AI will be ready for prime time when Apple Research and Engineering says it is ready?

David I.

Neal Freeland

What do you think of the analogy that "AI is an intern: it works hard, but you have to check the work"?

The knowledge worker

So well said. As a user, I only care whether o3 can get the job done or not. I really don't care if it "thinks" or not.

Peter Buck

Apple proves AI is not human, just screwed up by humans. Thanks for the insight. I had to take a satirist's view of Apple's attempt: **Siriously?? Apple Bites the Core of AI!** *Siri’s still buffering while ChatGPT steals the show.* https://www.linkedin.com/posts/peterbuck_wwdc-apple-ai-activity-7337850760224718848-VTvy?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAFYAQBgKKZ4egl-C6mubx2WaF1ENtK5nA

khimru

I think you have understated in #3 the ACTUAL danger that comes out of all this: while AI can't “think” or “reason”, it can invent things very well. And then it becomes something best characterized as self-propaganda (propaganda being “information, especially of a biased or misleading nature, used to promote a political cause or point of view”). LLMs are known to create exactly that: they select from their vast memory the facts that seem to support whatever is expressed in the prompt, or invent such “facts” when nothing matches.

Combine that with the massive “AI is almost human” drive and the “thinking computers are right” framing, and you have a MASSIVE danger to society, in a place that is almost entirely omitted from all these “safety studies”.

The danger, and it is QUITE REAL AND TANGIBLE, is not that AI would try to do harm to humans, but that HUMANS emboldened by their chats with AI would. And AI doesn't even need to be super-intelligent and malicious to cause harm that way; it just needs to be convincing, and AI as it exists today is VERY convincing!

SocialEyes

Steve, an insightful and useful report. Even more relevant to our work, which is building an AI-first global healthcare platform for LMICs, was the separate post about Clippy. Our platform will be used almost exclusively by the Instagram generation, across several cultures, so how do we support them in a way they will accept and use? We're technology guys, so how can we add a human guru, plus a sidekick per the Disney recommendation? Maybe we don't add it; maybe we make it a core experience for the social media age.
