zlacker

[parent] [thread] 17 comments
1. aidama+(OP)[view] [source] 2023-11-18 03:31:57
GPT-4 is not remotely unconvincing. It is clearly more intelligent than the average human, and it reasons in the exact same way humans do. If you provide the steps to reason through any concept, it understands at a human level.

GPT-4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT-4 is human-level intelligence.

replies(6): >>SkyPun+I >>cscurm+l1 >>static+e2 >>haolez+x2 >>lossol+H3 >>morsec+P3
2. SkyPun+I[view] [source] 2023-11-18 03:36:32
>>aidama+(OP)
The only thing GPT-4 is missing is the ability to recognize when it needs to ask more questions before it jumps into a problem.

When you compare it to an entry-level data-entry role, it's absolutely AGI. You loosely tell it what it needs to do, step by step, and it does it.
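For concreteness, here's roughly what I mean, using the OpenAI Python SDK (pre-1.0 API style); the task, the field names, and the email are all made up for illustration:

    import openai  # pip install "openai<1.0" for this older API style

    openai.api_key = "sk-..."  # your key here

    # Hypothetical data-entry task: pull structured fields out of a
    # free-form email. The steps, field names, and email are invented.
    steps = (
        "1. Read the email below.\n"
        "2. Extract the sender's name, company, and requested quantity.\n"
        "3. Return them as JSON with keys: name, company, quantity."
    )
    email = "Hi, this is Dana from Acme Corp. We'd like 40 more units."

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a careful data-entry clerk."},
            {"role": "user", "content": steps + "\n\n" + email},
        ],
    )
    print(resp["choices"][0]["message"]["content"])  # the filled-in JSON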

replies(1): >>dekhn+la
3. cscurm+l1[view] [source] 2023-11-18 03:42:16
>>aidama+(OP)
Sorry, but robust research says no. Remember, people thought ELIZA was AGI too.

https://arxiv.org/abs/2308.03762

If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.

replies(2): >>iamnot+d4 >>Camper+e6
4. static+e2[view] [source] 2023-11-18 03:49:12
>>aidama+(OP)
Fascinating. What do you make of the fact that GPT-4 says you have no clue what you are talking about?
replies(1): >>postal+r3
5. haolez+x2[view] [source] 2023-11-18 03:51:43
>>aidama+(OP)
I kind of agree, but at the same time we can't be sure of what's going on behind the scenes. It seems that GPT-4 is a combination of several huge models with some logic to route requests to the most apt ones. Maybe an AGI would make more sense as a single, more cohesive structure?
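Pure speculation, but the routing I'm imagining could be as simple as this toy dispatcher (the classifier and model names are invented):

    # Pure invention for illustration: a dumb classifier routing each
    # request to whichever huge model seems most apt.
    def classify(request: str) -> str:
        if "def " in request or "```" in request:
            return "code"
        if any(ch.isdigit() for ch in request):
            return "math"
        return "chat"

    EXPERTS = {
        "code": "code-expert-model",
        "math": "math-expert-model",
        "chat": "general-chat-model",
    }

    def route(request: str) -> str:
        return EXPERTS[classify(request)]

    print(route("What is 17 * 23?"))  # -> math-expert-model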

Also, the fact that it can't incorporate new knowledge while it interacts with us kind of limits the idea of an AGI.

But regardless, it's absurdly impressive what it can do today.

6. postal+r3[view] [source] [discussion] 2023-11-18 03:58:32
>>static+e2
How do you know you aren't arguing against a GPT-4 bot?
7. lossol+H3[view] [source] 2023-11-18 04:00:56
>>aidama+(OP)
Well, if it's so smart, then maybe it will finally learn to count someday.

https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...
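To be fair, the usual explanation is tokenization: the model sees token ids, not letters. OpenAI's tiktoken library shows the split:

    import tiktoken  # pip install tiktoken

    # GPT-4 never sees letters, only token ids, which is the usual
    # explanation for why character-level counting goes wrong.
    enc = tiktoken.encoding_for_model("gpt-4")
    tokens = enc.encode("strawberry")
    print(tokens)                             # a few integer ids
    print([enc.decode([t]) for t in tokens])  # the word arrives pre-chunked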

8. morsec+P3[view] [source] 2023-11-18 04:01:27
>>aidama+(OP)
These models can't even form new memories beyond the length of their context windows. It's impressive, but it's clearly not AGI.
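The only "memory" is the prompt itself; the standard workaround just drops the oldest turns. A toy sketch:

    # Toy sketch: the whole "memory" is the message list, and once it
    # exceeds the token budget, the oldest turns are simply dropped.
    def trim(history, budget=8192):
        # len() stands in for a real tokenizer's count here.
        while sum(len(turn) for turn in history) > budget:
            history.pop(0)  # earliest memories vanish first
        return history

    history = ["turn %d: some chat message" % i for i in range(10_000)]
    print(len(trim(history)))  # only the most recent few hundred survive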
replies(1): >>MVisse+q8
9. iamnot+d4[view] [source] [discussion] 2023-11-18 04:05:38
>>cscurm+l1
It's not AGI. But I'm not convinced we need a single model that can reason to make super-powerful, general-purpose AI. If you can have a model detect where it can't reason and pass tasks off appropriately to better methods or domain-specific models, you can get very powerful results. OpenAI is already on the path to doing this with GPT.
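Even a crude version of that hand-off is useful. A toy sketch (the detector and the tools are placeholders, not anything OpenAI actually ships):

    import re

    # Placeholder detector and tools: spot a task the model would
    # fumble and route it to an exact method instead.
    def solve_arithmetic(task: str) -> str:
        expr = re.search(r"\d[\d\s+\-*/().]*", task).group()
        return str(eval(expr))  # fine for a sketch; never eval untrusted input

    def ask_llm(task: str) -> str:
        return "<free-form model answer to: %s>" % task  # stubbed model call

    def answer(task: str) -> str:
        if re.search(r"\d+\s*[-+*/]\s*\d+", task):  # looks like arithmetic
            return solve_arithmetic(task)
        return ask_llm(task)

    print(answer("what is 1234 * 5678?"))  # -> 7006652, computed exactly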
10. Camper+e6[view] [source] [discussion] 2023-11-18 04:21:19
>>cscurm+l1
As if most humans would do any better on those exercises.

This thing is two years old. Be patient.

replies(2): >>cscurm+pk >>smolde+4x
11. MVisse+q8[view] [source] [discussion] 2023-11-18 04:36:36
>>morsec+P3
Neither can you without your short-term memory system. Or your long-term memory system in your hippocampus.

People who have lost those abilities still have a human level of intelligence.

replies(1): >>morsec+0d
12. dekhn+la[view] [source] [discussion] 2023-11-18 04:48:54
>>SkyPun+I
This sort of property ("loosely tell it what it needs to do, step by step, and it does it") is definitely exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say instead that it's an emergent property of language models trained on extremely large corpora that contain many examples which, in embedding space, aren't that far from what you're asking the model to do.
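To make "not that far in embedding space" concrete: closeness is usually measured by cosine similarity. A toy example with made-up three-dimensional vectors (real embeddings have thousands of dimensions):

    import numpy as np

    # Made-up 3-d vectors standing in for real embeddings.
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    seen_in_training = np.array([0.9, 0.1, 0.3])
    your_request     = np.array([0.8, 0.2, 0.3])
    truly_novel      = np.array([-0.2, 0.9, -0.4])

    print(cosine(seen_in_training, your_request))  # ~0.99: "nearby" request
    print(cosine(seen_in_training, truly_novel))   # negative: genuinely far away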

I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as the ability to solve truly novel problems outside the training corpus. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental models of ML researchers before we have systems that can truly reason in a way that demonstrates generalized intelligence.

13. morsec+0d[view] [source] [discussion] 2023-11-18 05:07:30
>>MVisse+q8
Sure, people with aphasia lose the ability to produce speech at all, but if ChatGPT responded unintelligibly every time, you wouldn't characterize it as intelligent.
14. cscurm+pk[view] [source] [discussion] 2023-11-18 06:02:40
>>Camper+e6
This comparison again lol.

> As if most humans would do any better on those exercises.

That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly" and declaring the machine's failure to fly irrelevant.

This thing either objectively reasons or it doesn't. How well humans do on those tests is irrelevant.

> This thing is two years old. Be patient.

Nobody is writing off the future. We are debating the current technology. AI has been around for 70 years; just open any history book on AI.

At various points since 1950, gullible masses have claimed AGI.

replies(1): >>Camper+Ey
15. smolde+4x[view] [source] [discussion] 2023-11-18 08:05:53
>>Camper+e6
Transformer-based LLMs are over half a decade old at this point, and GPT-4 is the least efficient model of its kind ever produced (that I am aware of).

OpenAI's performance is not, and has never been, proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper, comparable alternative.

What exactly are we holding out for, at this point? A miracle?

16. Camper+Ey[view] [source] [discussion] 2023-11-18 08:20:21
>>cscurm+pk
> At various points since 1950, gullible masses have claimed AGI.

Who's claiming it now? All I see is a paper slagging GPT-4 for struggling on tests that no one ever claimed it could pass.

In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.

(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)

replies(1): >>cscurm+PV1
17. cscurm+PV1[view] [source] [discussion] 2023-11-18 17:44:39
>>Camper+Ey
The guy I replied to is claiming AGI:

>>38314733

"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "

replies(1): >>Camper+a22
18. Camper+a22[view] [source] [discussion] 2023-11-18 18:14:39
>>cscurm+PV1
Fair enough; that seems premature. Neural networks have clearly been exceeding human performance in some specific ways, going back to AlphaGo. It's almost as clear that related techniques are capable of approaching AGI in the 'G' (general) sense. What's needed now is refinement rather than revolution.

Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
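And the emit-and-execute loop is mechanically simple, which is what makes it such a big deal. A sketch with the model call stubbed out:

    # Sketch of the emit-and-execute loop, with the model call stubbed
    # out; a real version would ask GPT-4 to write the snippet.
    def model_writes_code(problem: str) -> str:
        # Pretend GPT-4 returned this for "count the r's in strawberry".
        return "result = 'strawberry'.count('r')"

    scope = {}
    exec(model_writes_code("count the r's in strawberry"), scope)
    print(scope["result"])  # 3 -- exact, where direct token "reasoning" fails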
