It feels like half of the arguments are designed as engagement bait, with logical consistency a distant concern:
> If hallucination matters to you, your programming language has let you down.
This doesn’t even make sense. LLMs hallucinate things well beyond simple programming language constructs. I regularly run into references to functions or library methods that would be great if they existed, but that the LLM made up on the spot.
The thing is, the author clearly must know this. Anyone who uses LLMs knows this. So why put such a bizarre claim in the article other than as engagement bait to make readers angry?
There are numerous other strange claims throughout the article, like waving away the IP rights argument because some programmers pirate TV shows. It’s all so bizarre.
I guess I shouldn’t be surprised to scroll to the bottom and see that the author is an HN comment section veteran, because this entire article feels like it started as a reasonable discussion point and then got twisted into Hacker News engagement bait for the company blog. And it’s working well, judging by the engagement counts.
I think the author's point is that your language (and more generally the tooling around it) should make this obvious. Almost all AI agents these days will at minimum run linting tools and clean up lints (which would catch methods and library imports that don't exist), if they don't actively attempt to compile and test the code they've written. So you as the end user should (almost) never see these made-up functions.
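A minimal sketch of the kind of thing I mean, using Python (the hallucinated method name here is made up for illustration, not taken from the article):

```python
from datetime import date, timedelta

# A typical hallucination: the model invents a convenient method that
# doesn't exist on the real API, e.g.
#
#     deadline = date.today().add_days(30)   # datetime.date has no add_days()
#
# A type checker (mypy/pyright) or simply running the line flags the
# unknown attribute straight away, so an agent that lints/tests its own
# output fixes it before you ever see it. The real API is:

deadline = date.today() + timedelta(days=30)
print(deadline)
```

The point isn't that hallucinations don't happen, it's that in a language with decent static tooling they surface as immediate, mechanical errors rather than silent bugs.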