so what if they don't "understand", by your very specific definition of the word? the person you're replying to is talking about the fact that they can say something to their computer in casual human language and it will produce a useful response, where previously that was not true. whether that fits your suspiciously specific definition of "understanding" does not matter a bit.
so what if they're over-confident in areas outside their training data? provide more training data, improve the models, reduce the hallucinations. it isn't an issue with the concept, it's an issue with the execution. yes, you'll never get it down to 0%, but so what? humans hallucinate too. what are we aiming for? omniscience?