The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API in terms of zeros and ones, or voltages in transistors. Technically accurate, but useless for reaching any conclusion about the high-level system.
We need a higher level of abstraction to talk about higher-level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those levels. So, given that LLMs somehow imitate humans (at least in their output), anthropomorphization is the best abstraction we have, and people naturally resort to it when discussing what LLMs can do.
To be honest, the impression I've gotten is that some people are just very interested in talking about not anthropomorphizing AI and less interested in talking about AI behavior, so they see conversations about the latter as a chance to talk about the former.
I asked Claude to write an E-AC3 audio component so I can play videos with E-AC3 audio in the old version of QuickTime I really like using. Claude's decoder includes the ability to write debug output to a log file, so Claude is studying how QuickTime and the component interact, and it's controlling QuickTime via AppleScript.
Sometimes QuickTime crashes, because this ancient API has its roots in the classic Mac OS days and is not exactly good. Claude reads the crash logs on its own—it knows where they are—and continues on its way. I'm just sitting back and trying to do other things while Claude works, although it's a little distracting that something else is using my computer at the same time.
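For anyone curious what that loop looks like, here's a rough Python sketch of the kind of thing going on, not Claude's actual code: drive QuickTime Player through osascript, then check the standard macOS crash-report directory. The log path, the test clip, the glob pattern, and the application name are placeholders (an older QuickTime install would have a different app name).

```python
import subprocess
import time
from pathlib import Path

# Where macOS writes per-user crash reports (.ips on recent systems, .crash on older ones).
CRASH_DIR = Path.home() / "Library/Logs/DiagnosticReports"
# Placeholder path for the debug log the decoder component writes.
DEBUG_LOG = Path("/tmp/eac3_component.log")


def run_applescript(script: str) -> str:
    """Compile and run an AppleScript snippet with osascript, returning its stdout."""
    result = subprocess.run(
        ["osascript"], input=script, capture_output=True, text=True, check=False
    )
    return result.stdout.strip()


def play_in_quicktime(movie_path: str) -> None:
    # App name here is the modern "QuickTime Player"; an older install
    # (e.g. QuickTime Player 7) would need a different name.
    run_applescript(f'''
        tell application "QuickTime Player"
            open POSIX file "{movie_path}"
            play document 1
        end tell
    ''')


def new_crash_reports(since: float) -> list[Path]:
    # Anything QuickTime-related written after `since` is probably ours.
    return [p for p in CRASH_DIR.glob("QuickTime*") if p.stat().st_mtime > since]


if __name__ == "__main__":
    start = time.time()
    play_in_quicktime("/tmp/test_eac3.mp4")  # placeholder test clip
    time.sleep(10)                           # let playback succeed or crash
    for report in new_crash_reports(start):
        print("QuickTime crashed, see:", report)
    if DEBUG_LOG.exists():
        print(DEBUG_LOG.read_text()[-2000:])  # tail of the component's debug output
```

None of the individual pieces are exotic; the striking part is watching them get strung together and iterated on without me in the loop.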
I really don't want to anthropomorphize these programs, but it's just so hard when it's acting so much like a person...