9 Comments
Ravi Raman

I get the reason why people gravitate to making machines appear more natural (and software mimic the real world). But it gets old really quickly!

I think robots will take this to an extreme for a while, until people realize that it's better to have a strangely shaped machine that works super well than a human- (or cat- or dog-) looking one that trips, falls, and bumps into stuff!

Pramodh Mallipatna

Yes, treating it as a tool, and not something beyond that, is the right way to use it.

My related article: "From Scaling to Bottleneck Era: AGI Meets the Data Wall"

https://open.substack.com/pub/pramodhmallipatna/p/agi-meets-the-data-wall

christopher caen

So glad someone finally said the quiet part out loud. It's as if no one read "Full Consciousness Is All You Need"… oh wait… my sources are telling me that it's actually "Attention Is All You Need."

Steven Sinofsky

Sorry, comments were initially turned off. Fixed.

Neal Freeland

What do you think of the analogy that "AI is an intern: it works hard, but you have to check the work"?

The knowledge worker

So well said. As a user, I only care whether o3 can get the job done or not. I really don't care if it "thinks" or not.

Peter Buck

Apple proves AI is not human, just screwed up by humans. Thanks for the insight. I had to take a satirist's view of Apple's attempt: **Siriously?? Apple Bites the Core of AI!** *Siri's still buffering while ChatGPT steals the show.* https://www.linkedin.com/posts/peterbuck_wwdc-apple-ai-activity-7337850760224718848-VTvy?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAFYAQBgKKZ4egl-C6mubx2WaF1ENtK5nA

khimru

I think you have understated in #3 the ACTUAL danger that comes out of all this: while AI can't “think” or “reason”… it can invent things very well. And then it becomes something best characterized as self-propaganda (propaganda being “information, especially of a biased or misleading nature, used to promote a political cause or point of view”… and LLMs are known to create exactly that when they select, from their vast memory, facts that seem to support whatever is expressed in the prompt… or invent such “facts” when nothing matches).

Combine that with the massive “AI is almost human” drive and the “thinking computers are right” framing… and you have a MASSIVE danger to society, in a place that is almost entirely omitted from all these “safety studies”.

The danger, and it is QUITE REAL AND TANGIBLE, is not that AI would try to harm other humans, but that HUMANS, emboldened by their chats with AI, would do it. And AI doesn't even need to be super-intelligent and malicious to cause harm that way; it just needs to be convincing… and AI as it exists today is VERY convincing!

SocialEyes

Steve, an insightful and useful report. Even more relevant to our work, which is building an AI-first global healthcare platform for LMICs, was the separate post about Clippy. Our platform will be used almost exclusively by the Instagram generation, across several cultures, so how do we support them in a way they will accept and use? We're technology guys, so how can we add a human guru, plus a sidekick per the Disney recommendation? Maybe we don't add it; maybe we make it a core experience for the social media age.
