I guess my question is why there's an expectation that GPT must be impossible for bad actors to trick into producing whatever content.
What matters is that it gives good answers to honest customers.
For the same reasons forging a contract is different from getting an idiot to sign one.
Customer service has to consist of different levels of help tools. And current AI tools have to be tested in the field before we can improve them.
You have limited resources for Customer Support, so it's good to have filtering layers (docs, forms, search, GPT) in front of the actual human Customer Support.
For many questions a person will find the answer much faster in the documentation/manual itself than by calling support. For many other types of questions, an LLM will likely be able to respond much more quickly and efficiently.
It's just a matter of providing this optimal pathway.
You don't have to think of a Customer Support LLM as the same thing as a final Sales Agent.
You can think of it as a tool that has specialized information fed into it via embeddings or training, and that can spend unlimited time with you answering any stupid questions you might have. I find I have a much better experience with chatbots, as I can drill deep into the "why's" that might otherwise annoy a real person.
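Roughly, the "embeddings" part just means retrieving the most relevant doc snippets for a question and stuffing them into the prompt. A toy sketch of the idea (embed() here is just a bag-of-words stand-in; in practice you'd call a real embedding model and pass the built prompt to an actual LLM):

```python
import math
from collections import Counter

# Hypothetical support docs; in a real setup these would be your actual
# documentation chunks, embedded once and stored in a vector index.
DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "To reset your password, use the 'Forgot password' link on the login page.",
    "Enterprise plans include priority support and a dedicated account manager.",
]

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag of words. Swap in a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, top_k: int = 2) -> str:
    # Rank docs by similarity to the question and prepend the best matches,
    # so the LLM answers from your material instead of guessing.
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I reset my password?"))
```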