Customer service needs to offer different levels of help tools. And current AI tools have to be tested in real use before we can improve them.
You have limited resources for Customer Support, so it's good to have filtering layers like Docs, Forms, Search, and GPT in front of the actual Customer Support.
For many questions, a person will find the answer much faster in the documentation or manual than by calling support. For many other kinds of questions, an LLM may be able to respond much more quickly and efficiently.
It's just a matter of providing this optimal pathway.
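Roughly the kind of pathway I mean, as a toy sketch (the tier names, FAQ entries, and fall-through logic are purely my own illustration, not any real product's setup):

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Tier:
        name: str
        handler: Callable[[str], Optional[str]]

    def search_docs(question: str) -> Optional[str]:
        # Toy stand-in for documentation/FAQ search.
        faq = {"reset password": "Use the 'Forgot password' link on the login page."}
        for key, answer in faq.items():
            if key in question.lower():
                return answer
        return None

    def ask_chatbot(question: str) -> Optional[str]:
        # Placeholder for an LLM call; returns None when the bot isn't confident.
        return None

    def human_agent(question: str) -> Optional[str]:
        return "Escalated to a human support agent."

    # Cheapest tier first; a question only reaches a human if everything above gave up.
    TIERS = [Tier("docs", search_docs), Tier("chatbot", ask_chatbot), Tier("human", human_agent)]

    def route(question: str) -> str:
        for tier in TIERS:
            answer = tier.handler(question)
            if answer is not None:
                return f"[{tier.name}] {answer}"
        return "No answer available."

    print(route("How do I reset password?"))  # handled by the docs tier
    print(route("My invoice looks wrong"))    # falls through to a human

The point is just that each question only costs a human's time after the cheaper tiers have had a chance to answer it.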
You don't have to think of a Customer Support LLM as the same thing as a final Sales Agent.
You can think of it as a tool that has specialized information fed into it using embeddings or training, and that can spend unlimited time with you answering any stupid questions you might have. I find I have a much better experience with chatbots, as I can drill deep into the "why"s, which might otherwise annoy a real person.
So a ChatBot that can't intentionally lie or hide things could actually be an improvement in such cases.
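For the "specialized information fed into it using embeddings" part, here is a toy sketch of what I mean (the bag-of-words similarity is only a stand-in for a real embedding model, and the knowledge-base snippets are invented):

    import math
    import re
    from collections import Counter

    KNOWLEDGE_BASE = [
        "Towing capacity for the base trim is 1,500 lbs.",
        "The battery warranty covers 8 years or 100,000 miles.",
        "DC fast charging peaks at 150 kW on the long-range trim.",
    ]

    def embed(text: str) -> Counter:
        # Toy bag-of-words "embedding"; a real system would call an embedding model.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(question: str, k: int = 2) -> list:
        # Return the k snippets most similar to the question.
        q = embed(question)
        return sorted(KNOWLEDGE_BASE, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

    question = "How long is the battery warranty?"
    context = "\n".join(retrieve(question))
    print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

The idea is that the bot answers from snippets retrieved out of the product data rather than from whatever it happens to remember, which is exactly the kind of product-finding assistant I'd want.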
Replying here since the thread won't allow further nesting. But I'm not following what you mean, then.
I'm not seeing the outcome from Chevy as poor, any more than "inspect element" would be poor.
You are right, it does seem to allow it. But even after 20 minutes, I'm still not sure exactly what you mean.
>You just add a disclaimer that none of what the bot says is legally binding
The combination of legality and AI can make for a complex and nuanced problem. A superficial solution like "just add a disclaimer" probably doesn't capture enough of that nuance to produce a great outcome. I.e., a superficial understanding leads us to oversimplify our solutions. Just like with the responses, it seems like you are in more of a hurry to send a retort than to understand the point.
Why can't it just be a tool for assistance that is not legally binding?
Also, throughout this year I have thought about these problems, and to me it has always been weird how much trouble people have with "hallucinations". And I've thought about a chatbot exactly like the one Chevy used, and how awesome it would be to use something like that myself to find products.
To me, the expectation that this has to be legally binding, etc., just seems misguided.
AI tools increase my productivity so much. People also often make things up and lie, but it's even harder to tell when they do, since everyone is different and everyone lies differently.
I think you're getting my point confused with a tangentially related one. Your point may be "chatbots shouldn't be legally binding" and I would tend to agree. But my point was that simply throwing a disclaimer on it may not be the best way to get there.
Consider if poison control uses a chatbot to answer phone calls and give advice. They can't waive their responsibility by just throwing a disclaimer on it. It doesn't meet the current strict liability standards regarding what kind of duty is required. There is such a thing in law as "duty creep," and there may be liability if a jury finds it a reasonable expectation that a chatbot provides accurate answers. To my point, the duty is going to be largely context-dependent, and that means broad-brush, superficial "solutions" probably aren't sufficient.