>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
>>erhaet+1O1
Mainly because LLMs have so far passed basically every formal test of ‘AGI’, including totally smashing the Turing test.
Now we’re just reliant on ‘I’ll know it when I see it’.
The case for LLMs as AGI isn’t about inspecting the mechanics and asking whether we think they could produce AGI - it’s about looking at the tremendous results and successes.
>>Closi+VP1
LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and why, and to build on those papers to develop novel ideas). True AGI, like any decent grad student, could do this.