zlacker

Ask HN: Is anyone else getting AI fatigue?
1. dual_d+ha 2023-02-09 12:18:31
>>grader+(OP)
The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe of a bit of software that chains words to another based on some statistical model.

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966; it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

GitHub Copilot? Great, now I get to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...), while also writing new code. I'm beyond thrilled...

So, no, I don't have AI fatigue, because we absolutely have no AI anywhere. But I do have a massive bullshit and hype fatigue that is getting worse all the time.

2. rcme+5e 2023-02-09 12:40:55
>>dual_d+ha
As much as I’m sick of AI products, I’m even more sick of the “ChatGPT is bullshit” argument.
3. jug+Gf 2023-02-09 12:51:24
>>rcme+5e
I like this take. It already has many clear applications, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. Calling it bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.

But yes indeed, there are many, many AI products being launched during this era of rapid progress. Even fairly shoddy products can be monetized if they provide value over what we had before. I think the crowded market, with all the bullshit and all the awesome at once, is a sign of very rapid progress in this space. It probably won't always be like this, and who knows what we are approaching.

4. LastTr+Ml 2023-02-09 13:26:47
>>jug+Gf
How are you using it at work?
5. matwoo+eq 2023-02-09 13:52:12
>>LastTr+Ml
I've used it to proof emails for grammar, and it's done ok.

I'll also throw random programming questions at it, and it's been hit and miss. SO is probably still faster, and I like seeing the discussion. The problem with ChatGPT right now is that it presents its answers as certainties when it's often wrong.

I can see the benefits of this interaction model (basically summarizing everything you'd get from a search into what feels like a person talking back), but I don't see anything that justifies change-the-world levels of hype at the moment.

I also wonder if LLMs will get worse over time through error propagation, as more and more of the content they're trained on is generated by other LLMs.
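
(Rough, deliberately silly sketch of why that worries me: fit a simple model to data, then let every later "generation" see only samples drawn from the previous fit. This is a hypothetical toy, fitting a Gaussian rather than training an LLM, but it shows how estimation error can compound once the output feeds back in as input.)

    # Toy thought experiment, not a real training pipeline: each generation
    # is "trained" (a Gaussian fit) only on the previous generation's output.
    # Estimation error compounds, and the fitted spread typically drifts and
    # shrinks over many generations.
    import random
    import statistics

    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0 sees real data

    for gen in range(201):
        mu_hat = statistics.fmean(data)
        sigma_hat = statistics.stdev(data)
        if gen % 25 == 0:
            print(f"gen {gen:3d}: mean={mu_hat:+.3f} std={sigma_hat:.3f}")
        # The next generation only ever sees the fitted model's samples.
        data = [random.gauss(mu_hat, sigma_hat) for _ in range(50)]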
