I can understand having an LLM trained on previous inquiries made via email, chat, or transcribed phone calls, but a general LLM like ChatGPT — how is that going to be able to answer customers' questions? The information ChatGPT has that's specific to Chevrolet of Watsonville can't be any more than what is already publicly available, so if customers can't find it, then maybe design a better website?
“OMG you guys, we can save so much money! I can’t wait to fire a bunch of people! Quick, drop everything and (run an expensive experiment with this | retool our entire data org for it(!) | throw a cartoon bag of cash at some shady company promising us anything we ask for)! OMG, I’m so excited for this I think I’ll just start the layoffs now, because how can it fail?”
- - - - -
The above is happening all over the place right now, and has been for some months. I'm paraphrasing for effect and conciseness, but not being unfair. I've seen a couple of these up close already, and I'm not even trying to find them, nor am I in the segments of the industry most likely to encounter them.
It’d be very funny if it weren’t screwing up a bunch of folks’ lives.
[edit] Oh, and for bigger orgs there's a real "we can't be left behind!" fear driving it. VC-backed ones are desperate to put "AI" in their decks for further rounds or acquisition talks. It's wild, and very little of it has anything to do with producing real value. It's often harming productivity. It's all some very Dr. Strangelove sort of stuff.