zlacker

[return to "Ask HN: Is anyone else getting AI fatigue?"]
1. dual_d+ha 2023-02-09 12:18:31
>>grader+(OP)
The "I" in AI is just complete bullshit, and I can't understand why so many people are in awe of a bit of software that chains words together based on some statistical model.
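For what "chains words together based on a statistical model" means at its crudest, here is a toy bigram sketch (a hypothetical minimal illustration, nothing like the transformer behind ChatGPT): it counts which word follows which, then samples each next word from those counts.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """Chain words by sampling each next word from the follower counts."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: the last word was never followed by anything
            break
        choices, counts = zip(*followers.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ran")
print(generate(model, "the"))
```

The output is locally plausible word-chaining with no understanding behind it, which is exactly the parent's point; modern LLMs condition on far longer context, but the "predict the next token from statistics" framing is the same.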

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...), while also writing new code. I'm beyond thrilled ...

So, no, I don't have AI fatigue, because we have absolutely no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.

2. auctor+vj 2023-02-09 13:13:49
>>dual_d+ha
I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, and no one managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

3. liveon+hl 2023-02-09 13:23:56
>>auctor+vj
So, to you, ChatGPT is approaching AGI?
4. bioeme+pm 2023-02-09 13:30:26
>>liveon+hl
I do believe that if we are going to get AGI without some random revolutionary breakthrough, reaching it iteratively instead, it's going to come through language models.

Think about it.

What's the most expressive medium we have which is also absolutely inundated with data?

To broadly be able to predict human speech you need to broadly be able to predict the human mind. To broadly predict a human mind requires you build a model of it, and to have a model of a human mind? Welcome to general intelligence.

We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

5. moron4+js 2023-02-09 14:01:08
>>bioeme+pm
> I do believe that if we are going to get AGI without some random revolutionary breakthrough, reaching it iteratively instead, it's going to come through language models.

Language is way, way removed from intelligence. This is well known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose yet are so cognitively impaired that they can't even tell the difference between fantasy and reality.

6. bioeme+qI 2023-02-09 15:03:44
>>moron4+js
We don't judge AIs by their ability to produce language; we judge them by their coherence and ability to respond intelligently, to give us information we can use.