zlacker

[return to "Ask HN: Is anyone else getting AI fatigue?"]
1. dual_d+ha[view] [source] 2023-02-09 12:18:31
>>grader+(OP)
The "I" in AI is just complete bullshit, and I can't understand why so many people are in awe of a bit of software that chains one word to another based on some statistical model.

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

GitHub Copilot? Great, now I get to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...), while also writing new code. I'm beyond thrilled ...

So, no, I don't have AI fatigue, because we have absolutely no AI anywhere. But I do have a massive bullshit and hype fatigue that is getting worse all the time.

2. auctor+vj[view] [source] 2023-02-09 13:13:49
>>dual_d+ha
I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, and none of them managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

3. rsynno+Co[view] [source] 2023-02-09 13:44:20
>>auctor+vj
> ChatGPT and similar models are revolutionary

For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.

4. kriops+Rp[view] [source] 2023-02-09 13:50:55
>>rsynno+Co
If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.
5. EVa5I7+Cr[view] [source] 2023-02-09 13:58:32
>>kriops+Rp
But will it? Even after accounting for the time needed to fix all the bugs it introduces?
6. Timwi+Pw[view] [source] 2023-02-09 14:21:07
>>EVa5I7+Cr
Humans introduce bugs too. ChatGPT is still new, so it probably makes more mistakes than a human at the moment, but it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard (and several other important regards).
7. rsynno+QI[view] [source] 2023-02-09 15:04:53
>>Timwi+Pw
> it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard

This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask "but how much time?" Like, a lot of people were confidently predicting speech recognition as good as a human's from the 90s onward, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.

That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things, "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.

As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.

(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)

8. EVa5I7+4v2[view] [source] 2023-02-09 21:33:33
>>rsynno+QI
Anyone still remember the self-driving hype?