zlacker

[return to "Ask HN: Is anyone else getting AI fatigue?"]
1. dual_d+ha[view] [source] 2023-02-09 12:18:31
>>grader+(OP)
The "I" in AI is just complete bullshit, and I can't understand why so many people are in awe of a bit of software that chains one word to another based on some statistical model.
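To be clear about what I mean by "chains words based on a statistical model": at its most primitive, that's just a bigram chain. A toy sketch (corpus and function names made up, obviously; real LLMs are vastly bigger, but the idea of picking the next word by counted statistics is the same):

```python
from collections import defaultdict

# Count which word follows which in a tiny made-up corpus,
# then pick the statistically most frequent successor.
corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent successor of `word` in the corpus."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("the"))  # "cat" follows "the" most often here
```

No understanding anywhere in sight, just counting.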

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966; it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

GitHub Copilot? Great, now I get to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...), while writing new code. I'm beyond thrilled ...

So, no, I don't have AI fatigue, because we have absolutely no AI anywhere. But I do have a massive bullshit and hype fatigue that is getting worse all the time.

2. auctor+vj[view] [source] 2023-02-09 13:13:49
>>dual_d+ha
I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, and no one managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

3. liveon+hl[view] [source] 2023-02-09 13:23:56
>>auctor+vj
So, to you, ChatGPT is approaching AGI?
4. bioeme+pm[view] [source] 2023-02-09 13:30:26
>>liveon+hl
I do believe that if we are going to reach AGI iteratively, without some random revolutionary breakthrough, it's going to come through language models.

Think about it.

What's the most expressive medium we have which is also absolutely inundated with data?

To broadly predict human speech, you need to broadly predict the human mind. To broadly predict a human mind, you need to build a model of it. And to have a model of a human mind? Welcome to general intelligence.

We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

5. hhjink+1v[view] [source] 2023-02-09 14:13:55
>>bioeme+pm
> To broadly be able to predict human speech you need to broadly be able to predict the human mind

This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

6. bioeme+vH[view] [source] 2023-02-09 14:59:41
>>hhjink+1v
Exactly. Since language is a compressed and transmittable result of our thought, predicting text as accurately as possible requires modeling that thought as well. A model with an understanding of the human mind will outperform one without.

> Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

Why? Wouldn't you expect that technique to generally fail if it isn't intelligent enough to know what's happening in the sentence?
