For example, I think "chain of thought" is a good name for what it denotes. It makes the concept easy to understand and discuss, and a non-anthropomorphized name would be unnatural and would unnecessarily complicate things. This doesn't mean I support companies insisting that LLMs think just like humans, or anything like that.
By the way, I would say anti-anthropomorphism has actually been a bigger obstacle to understanding LLMs than anthropomorphism itself. Its main proponents (e.g. Bender and the other authors of the "stochastic parrot" paper and related work) made many predictions about things LLMs surely couldn't do (on account of being mere next-word predictors, etc.) that turned out to be spectacularly wrong.
Tbh, I also think your analogy of "UI events -> Bits -> Transistor Voltages" to "AI thinks -> token de-/encoding + MatMul" is a stretch, since the "Bits -> Transistor Voltages" part is the foundational layer of both hierarchies.
"chain of thought" could probably be called "progressive on-track-inference" and nobody would bat an eye.