zlacker

[parent] [thread] 2 comments
1. Camper+(OP)[view] [source] 2023-11-18 08:20:21
At various points since 1950, credulous observers have claimed AGI was at hand.

Who's claiming it now? All I see is a paper slagging GPT-4 for struggling on tests that no one ever claimed it could pass.

In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.

(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)

replies(1): >>cscurm+bn1
2. cscurm+bn1[view] [source] 2023-11-18 17:44:39
>>Camper+(OP)
The guy I replied to is claiming AGI:

>>38314733

"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "

replies(1): >>Camper+wt1
3. Camper+wt1[view] [source] [discussion] 2023-11-18 18:14:39
>>cscurm+bn1
Fair enough, that claim does seem premature. Deep learning systems have clearly been exceeding human performance in some narrow domains since at least AlphaGo, and transformers continue that trend. It's almost as clear that related techniques are capable of approaching AGI in the 'G' (general) sense. What's needed now is refinement rather than revolution.

Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
