As with most of these hype cycles, there is an actually useful, interesting technology underneath, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
That's tiring, and really annoying.
It's incredibly cool technology, and it is great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3 it's good at generative writing, summarization, information search and extraction, and similar things.
It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Right now it's becoming a joke.
Also, a lot of products I've seen in the space are really, really bad, and I'm kinda worried AI will pick up a scam/shitty-product connotation.
made up bullshit
> summarization
except you can't possibly know the output has any relation whatsoever to the text being summarized
> information search and extraction
except you can't possibly know the output has any relation whatsoever to the information being extracted
people still fall for this crap?
Yes, it's overhyped, but it's not useless. It actually works quite well if you apply it to the right use cases in the right way.
In terms of accuracy, the hallucination issue is quite bad in ChatGPT; with GPT-3 it's a lot less pronounced, and you can reduce it even further with good prompt writing, fine-tuning, and sampling settings.
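As a concrete illustration of the "settings" point, here is a minimal sketch of how you might bias a GPT-3-era completion request toward faithful output: a grounding instruction in the prompt plus temperature 0. The parameter names match the classic OpenAI Completions API; the actual network call is commented out (it needs an API key), and the prompt wording is my own, not something from this thread.

```python
def grounded_summary_request(document: str) -> dict:
    """Build Completions-API parameters biased toward faithful summarization.

    Two levers: the prompt explicitly forbids inventing facts, and
    temperature=0.0 makes decoding greedy, cutting down on creative leaps.
    """
    prompt = (
        "Summarize the following text. Use only facts stated in the text; "
        'if something is not in the text, say "not stated".\n\n'
        f"Text:\n{document}\n\nSummary:"
    )
    return {
        "model": "text-davinci-003",  # a GPT-3-family completion model
        "prompt": prompt,
        "temperature": 0.0,           # greedy decoding: deterministic, less fanciful
        "top_p": 1.0,
        "max_tokens": 200,
    }

params = grounded_summary_request("GPT-3 was released by OpenAI in 2020.")
# response = openai.Completion.create(**params)  # requires an API key
print(params["temperature"])
```

This doesn't eliminate hallucination, but low temperature plus an explicit "only from the text" instruction is the standard first step before reaching for fine-tuning.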
Can we just recognize it for what it is?