One has to understand how LLMs work to see why hallucination is an inherent part of the technology sold as "AI". Maybe the core problem is implementation practices that remove critical thinking and testing? Maybe the core problem is the 'fake it till you make it' ideology? I don't know. But I am sure about one thing: this, like any other postmodern technology, will bring more problems than solutions.
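
As a minimal sketch of why hallucination is baked in: a language model picks the statistically likely next token, and nothing in that objective checks truth. The toy bigram model below (all words and probabilities are invented for illustration, not from any real model) happily emits a fluent falsehood, because fluency is all it optimizes for.

```python
import random

# Toy bigram "language model": probabilities reflect how often words
# co-occur in (imaginary) training text, not whether a statement is true.
BIGRAMS = {
    "the":      [("capital", 0.5), ("moon", 0.5)],
    "capital":  [("of", 1.0)],
    "of":       [("atlantis", 0.6), ("france", 0.4)],  # fluent nonsense can outweigh fact
    "atlantis": [("is", 1.0)],
    "france":   [("is", 1.0)],
    "is":       [("paris", 0.5), ("poseidonia", 0.5)],
}

def next_token(word):
    """Sample the next token purely by co-occurrence probability."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs)[0]

def generate(start, max_len=6):
    """Greedily chain sampled tokens into a confident-sounding sentence."""
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok is None:
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))  # e.g. "the capital of atlantis is poseidonia"
```

Nothing in that loop can distinguish "the capital of france is paris" from "the capital of atlantis is poseidonia"; both are just high-probability token chains. Scale changes the quality of the chains, not the nature of the objective.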