Just like with most of these hype cycles, there is an actually useful, interesting technology underneath, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
That's tiring, and really annoying.
It's incredibly cool technology and it is great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3, it's good at generative writing, summarization, information search and extraction, and similar things.
It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Right now it's becoming a joke.
Also, a lot of the products I've seen in this space are really, really bad, and I'm kinda worried AI will get a scam/shitty-product connotation.
made up bullshit
> summarization
except you can't possibly know the output has any relation whatsoever to the text being summarized
> information search and extraction
except you can't possibly know the output has any relation whatsoever to the information being extracted
people still fall for this crap?
Yes, it's overhyped, but it's not useless; it actually works quite well if you apply it to the right use cases in the right way.
In terms of accuracy, the hallucination issue is quite bad in ChatGPT; for GPT-3 it's much less pronounced, and you can reduce it even further with good prompt writing, fine-tuning, and the right settings.
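For what it's worth, here's a minimal sketch of what I mean by prompt writing and "settings" (assuming the pre-1.0 OpenAI Python client and the text-davinci-003 completion model; the prompt wording is just an illustration): ground the model in the source text and drop the temperature to 0, and the made-up stuff drops off noticeably.

```python
import openai

openai.api_key = "sk-..."  # your API key

def summarize(source_text: str) -> str:
    # Ground the model in the supplied text and tell it to refuse
    # rather than invent details it can't find there.
    prompt = (
        "Summarize the text below using only information it contains. "
        "If something isn't in the text, say so instead of guessing.\n\n"
        f"Text:\n{source_text}\n\nSummary:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 completion model
        prompt=prompt,
        temperature=0,             # low temperature = less creative drift
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()
```

It won't eliminate hallucinations, but constrained prompts plus deterministic sampling get you a long way compared to just chatting at it.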
Can we just recognize it for what it is?
I think it's fair to say this is not one of the use cases where it shines. It's not great at logic, and it's not that smart either.
That's exactly what the hype does. It makes claims that are too big, and then the technology gets dismissed when it inevitably doesn't live up to them.
I think we're already there. A legion of AI-based startups seems to be launching daily (https://www.futuretools.io/), offering little more than gimmicks.
See also: Gartner hype cycle
I agree with this; I feel like I've seen a lot of really cool technology get swept up in a hype storm and carried away into oblivion.
I wonder what the people who put out these innovations can do to shield themselves and their products from it?
Luckily I have a lot of faith in the OpenAI people - I hope they're shielding themselves from the technological form of audience capture.
My last resort is to just remove all AI references from my marketing and just deliver the product.
I've criticized it whenever it gets brought up as an alternative for academic research, coding, math, and other more sophisticated knowledge-based stuff. In my experience, at least, it falls apart when you need it to handle these reliably, and I haven't gone back.
But man, is it ever revolutionary at actually dealing with language and text.
As an example, I have a bunch of boring drama going on right now with my family, endless fucking emails while I'm trying to work.
I just paste them into ChatGPT and get it to summarize them, and then I get it to write a response. The onerous safeguards mean I don't have to worry about it being a dick.
Family has texted me about how kind and diplomatic I'm being and I honestly don't even really know what they're squabbling about, it's so nice!
It will even claim it can generate citations for you, which is pretty messed up, because when I tried it, it just fabricated them, replete with fake DOIs.
Where it shines is at squishy language stuff, like generating the framework of an email, paraphrasing a paragraph for you, or summarizing a news article.
It really is revolutionary at language tasks, but unfortunately the hype machine and these weird "AI sycophants" have caused people to dramatically overestimate its use cases.
Good luck with the drama! Make sure to read a summary for the next family meeting haha.
Yeah I will be sure to read it before meeting them, would be awkward if they found out I was using it during one of the disputes, which was whether or not to keep resuscitating Grandma.
ChatGPT's stupid ethical filters meant I actually had to type my response to that one all by myself.
Which has happened before. The original symbolic/heuristic AI, most notably expert systems, over-promised and ultimately under-delivered. This led directly to the so-called "AI winter," which lasted more than two decades and didn't end until quite recently. It's a very real concern, especially among people who want to push the technology forward and not just profit from it.