>This roughly matches my experience too, but I don't think it applies to this one.
I'm not so sure. The claim that any good programming language would inherently eliminate the concern about hallucinations seems pretty weak to me.
To be honest I’m not sure where the logic for that claim comes from. Maybe an abundance of documentation is the assumption?
Either way, dismissing one of LLMs' major flaws and blaming it on the language doesn't seem like the way to make that argument.
It seems obviously true to me: code hallucinations are where the LLM outputs code with incorrect details - syntax errors, calls to class methods that don't exist, invalid imports, etc.
If you have a strong linter in a loop (rough sketch below), those mistakes can be automatically detected and passed back into the LLM to get fixed.
Surely that's a solution to hallucinations?
It won't catch other types of logic error, but I would classify those as bugs, not hallucinations.
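Something like this is roughly what I have in mind (Python sketch; it assumes pyflakes is installed, and generate_code is a placeholder for whatever LLM call you're using, not a real API):

    # Sketch of a "linter in a loop": lint the LLM's output and feed any
    # failures back as a follow-up prompt until the code comes back clean.
    import subprocess
    import tempfile

    def lint(code: str) -> list[str]:
        """Return a list of problems found in the generated code."""
        problems = []
        try:
            compile(code, "<generated>", "exec")   # catches syntax errors
        except SyntaxError as e:
            problems.append(f"SyntaxError: {e}")
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(                   # catches undefined names, bad imports
            ["python", "-m", "pyflakes", path],
            capture_output=True, text=True,
        )
        problems.extend(line for line in result.stdout.splitlines() if line)
        return problems

    def generate_with_linting(prompt: str, generate_code, max_rounds: int = 3) -> str:
        """Ask the LLM, lint the answer, and pass errors back until clean."""
        code = generate_code(prompt)
        for _ in range(max_rounds):
            problems = lint(code)
            if not problems:
                return code
            feedback = "Your code has these problems, please fix them:\n" + "\n".join(problems)
            code = generate_code(prompt + "\n\n" + feedback)
        return code  # give up after max_rounds and return the last attempt

The details don't matter much; the point is just that hallucinated syntax, methods, and imports are mechanically detectable, so they can be caught and corrected without a human in the loop.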
Let's go a step further: the LLM can produce bug-free code too if we just call the bugs "glitches".
You're making a purely arbitrary decision about how to classify an LLM's mistakes based on how easy they are to catch, regardless of their severity or cause. But simply sorting the mistakes into a different bucket doesn't make them any less of a problem.