How does a program get excited? It's a program; it doesn't have emotions. It isn't even producing a faux-emotion the way a "not-duck" quacks, it simply lacks them. Any emotion you read into an LLM is anthropomorphism, and that's what I find odious.
Yes, I know it's not conscious in the way a living biological thing is. Yes, we all know you know that too. Nobody is being fooled.
I don't think this is a good example. How else would you describe, in English, what the script is actively doing? There's a difference between describing something and anthropomorphizing it.
> We say that a flaky integration "doesn't feel like working today".
When people say this, they're saying it tongue in cheek. Nobody is actually ascribing volition or emotion to the flaky integration. But even if they were, the difference is that there isn't an entire global economy propped up on convincing you that your flaky integration is nearing human levels of intelligence and sentience.
> Nobody is being fooled.
Are you sure about that? I'm entirely unconvinced that laymen out there – or, indeed, even professionals here on HN – know (or care about) the difference, and language like "it got excited and decided to send me a WhatsApp message" is both cringey and, frankly, dangerous because it pushes the myth of AGI.
And I don't think AGI is a "myth." It may or may not be achieved in the near future with current LLM-like techniques, but it's certainly not categorically impossible just because it won't be "sentient".