Of course they're going to try to brush it all away. That's easier than admitting this problem very much still exists and isn't going away anytime soon.
But in reality, hallucinations either cost people using AI a lot of time trying to steer the LLMs out of dead ends, or render those tools unusable.
We have legal and social mechanisms in place for the way humans are incorrect. LLMs are incorrect in new ways that our legal and social systems are less prepared to handle.
If a human support agent lies about a change to policy, the agent is fired and management communicates about the rogue actor, the unchanged policy, and how the issue has been handled.
How do you address an AI doing the same thing without removing the AI from your support system?