1. nozzle+(OP)[view] [source] 2026-01-26 22:48:04
> We say that a shell script "is trying to open this file".

I don't think this is a good example; how else would you describe, in plain English, what the script is actively doing? There's a difference between describing something and anthropomorphizing it.

> We say that a flaky integration "doesn't feel like working today".

When people say this, they're saying it tongue in cheek. Nobody is actually ascribing volition or emotion to the flaky integration. But even if they were, the difference is that there isn't an entire global economy propped up on convincing you that your flaky integration is nearing human levels of intelligence and sentience.

> Nobody is being fooled.

Are you sure about that? I'm entirely unconvinced that laymen out there – or, indeed, even professionals here on HN – know (or care about) the difference. Language like "it got excited and decided to send me a WhatsApp message" is both cringey and, frankly, dangerous, because it pushes the myth of AGI.

replies(1): >>apetre+ir
2. apetre+ir[view] [source] 2026-01-27 01:42:12
>>nozzle+(OP)
I think you're conflating two different things. It's entirely possible (and, I think, quite likely) that AI is simultaneously not humanlike (and not ACTUALLY "excited" in the way I thought you were objecting to earlier), but also IS "intelligent" for all intents and purposes. Is it of the same type and nature as human intelligence? No, probably not. Does that mean it's "just a flaky integration" that won't have a seismic effect on the economy? I wouldn't bet on it. It's certainly not a foregone conclusion, whichever way it ends up landing.

And I don't think AGI is a "myth." It may or may not be achieved in the near future with current LLM-like techniques, but it's certainly not categorically impossible just because it won't be "sentient".
