made up bullshit
> summarization
except you can't possibly know the output has any relation whatsoever to the text being summarized
> information search and extraction
except you can't possibly know the output has any relation whatsoever to the information being extracted
people still fall for this crap?
Yes, it's overhyped, but it's not useless; it actually works quite well if you apply it to the right use cases in the right way.
In terms of accuracy, ChatGPT's hallucination problem is quite bad; with GPT-3 it's much less pronounced, and you can reduce it further with good prompt writing, fine-tuning, and sampling settings.
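By "sampling settings" I mostly mean things like `temperature` and `top_p` on the completions API. A minimal sketch of what I mean (the model name and prompt here are just placeholders, and the actual call via the `openai` client is commented out since it needs an API key):

```python
# Sketch: request parameters that tend to reduce hallucination-prone sampling.
# Model name and prompt are placeholders, not a recommendation.
article_text = "..."  # the text you want summarized

request = {
    "model": "text-davinci-003",
    "prompt": "Summarize the text below in two sentences:\n\n" + article_text,
    "temperature": 0.0,   # near-greedy decoding: fewer creative detours
    "top_p": 1.0,
    "max_tokens": 128,
}

# import openai
# response = openai.Completion.create(**request)  # network call, not run here
```

Low temperature doesn't stop it from making things up, but it does cut down on the model wandering off into confident-sounding filler.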
Can we just recognize it for what it is?
I think it's fair to say this is not one of the use cases where it shines. It's not great at logic, and it's not that smart either.
That's exactly what the hype does. Overblown claims, and then it gets dismissed when it inevitably doesn't live up to them.
It will even claim it can generate citations for you, which is pretty messed up, because when I tried, it just fabricated them, complete with fake DOIs.
Where it shines is at squishy language stuff, like generating the framework of an email, paraphrasing a paragraph for you, or summarizing a news article.
It really is revolutionary at language tasks, but unfortunately the hype machine and these weird "AI sycophants" have caused people to dramatically overestimate its use cases.