I can understand having an LLM trained on previous inquiries made via email, chat or transcribed phone calls, but a general LLM like ChatGPT, how is that going to be able to answer customers questions? The information ChatGPT has, specific to Chevrolet of Watsonville can't be anymore than what is already publicly available, so if customers can't find it, then maybe design a better website?
In this particular case they screwed up the implementation.
Every actual application of an LLM in prod that I’ve seen has only been this. A better self service or support chatbot. So far, not exactly the “revolution” being advertised.
"What is the gas mileage of the Chevy Colorado?"
"What electric vehicles are in your lineup?"
"What is the difference between the Sport and Performance models of the Equinox?"
Feed the LLM the latest spec sheet as context and give it a few instructions ("act as a Chevy sales rep", "only recommend Chevy brand vehicles", "be very biased in favor of Chevy...") it can easily answer the majority of general inquiries from customers, probably more intelligently than most dealers or salespeople.
Besides, what makes you think that it’s ineffective? Any reason to believe that the chat bot was bad in fulfilling legitimate user requests? FYI, someone making it act outside of its intended purpose affects only that person’s experience.
It’s a DAN attack, people are having lots of fun with this type of prompt engineering.
It’s just some fun in the expense of the company paying for the API. The kind of fun that kids in the early days of the web were having by hacking websites to make it say something funny - just less harmful because no one else sees it.
“OMG you guys, we can save so much money! I can’t wait to fire a bunch of people! Quick, drop everything and (run an expensive experiment with this | retool our entire data org for it(!) | throw a cartoon bag of cash at some shady company promising us anything we ask for)! OMG, I’m so excited for this I think I’ll just start the layoffs now, because how can it fail?”
- - - - -
The above is happening all over the place right now, and has been for some months. I’m paraphrasing for effect and conciseness, but not being unfair. I’ve seen a couple of these up-close already, and I’m not even trying to find them, nor in segments of the industry most likely to encounter them.
It’d be very funny if it weren’t screwing up a bunch of folks’ lives.
[edit] oh and for bigger orgs there’s a real “we can’t be left behind!” fear driving it. For VC ones, they’re desperate to put “AI” in their decks for further rounds or acquisition talks. It’s wild, and very little of it has anything to do with producing real value. It’s often harming productivity. It’s all some very Dr Strangelove sort of stuff.