https://arxiv.org/abs/2308.03762
If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.
This thing is two years old. Be patient.
> As if most humans would do any better on those exercises.
That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly" and concluding that this machine's failure to fly is irrelevant.
This thing either objectively reasons or it doesn't. How well humans do on those tests is irrelevant.
> This thing is two years old. Be patient.
Nobody is writing off the future. We are debating the current technology. AI has been around for 70 years. Just open any history book on AI.
At various points since 1950, gullible people have claimed AGI had arrived.
OpenAI's performance is not and has never been proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper competitive alternative.
What exactly are we holding out for, at this point? A miracle?
Who's claiming it now? All I see is a paper slagging GPT-4 for struggling on tests that no one ever claimed it could pass.
In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.
(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)
"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "
Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
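To make that concrete: a pure next-token predictor tends to garble exact arithmetic on large numbers, but a model that emits code and has it executed gets an exact answer. A minimal sketch of that pattern; the question and the "model emits code" step are hypothetical stand-ins, and only the executed snippet is real Python:

    # Sketch of the "emit code instead of guessing" pattern.
    # A question that defeats digit-by-digit token prediction:
    question = "What is 2**521 - 1, exactly?"

    # Hypothetical: instead of predicting digits one token at a time,
    # the model responds with a program that computes the answer.
    emitted_code = "print(2**521 - 1)"

    # The host runs the emitted program; Python's arbitrary-precision
    # integers make the printed result exactly correct.
    exec(emitted_code)

The point of the pattern is that the correctness guarantee moves from the model's token statistics to the interpreter's semantics, which is exactly the kind of thing a parrot can't do.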