No one predicted that feeding LLMs more GPUs would turn out to be as useful as it has.
I think it’s also fair for Sam to start something new with a for-profit focus from the get-go.
Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we might as well claim that ELIZA was AGI, which would obviously be ridiculous.
How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this firsthand. I have a very little-known Haskell library that I have never written about online, so the model cannot have seen it before. I have asked ChatGPT to do various things with it, and I regularly ask it to answer questions others send me. It gets them basically right. This is completely novel.
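To make the shape of that test concrete, here is a minimal sketch; the combinator and all names are invented for illustration, not taken from my actual library. The experiment is: give the model only a type signature and a one-line description of an obscure function, then ask it to write a caller.

    -- Hypothetical stand-in for an obscure library combinator:
    -- group adjacent elements while the predicate holds, then
    -- summarize each group. (Names invented for illustration.)
    foldSpans :: (a -> a -> Bool) -> ([a] -> b) -> [a] -> [b]
    foldSpans _ _ [] = []
    foldSpans p f (x:xs) =
      let (grp, rest) = span (p x) xs
      in f (x : grp) : foldSpans p f rest

    -- The kind of caller one asks the model to write, given only
    -- the signature and description above:
    runLengths :: Eq a => [a] -> [(a, Int)]
    runLengths = foldSpans (==) (\g -> (head g, length g))

    main :: IO ()
    main = print (runLengths "aaabbbcc")  -- [('a',3),('b',3),('c',2)]

The point is that a correct runLengths cannot be retrieved from training data, because foldSpans appears nowhere online; the model has to work it out from the signature.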
It seems pretty obvious to me that scaling this approach will lead to computer systems that can solve problems they have never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about from scaling parameter counts... at all.
What is AGI if not problem solving in novel domains?
How do you compare ELIZA to GPT-4?
That is a definition. It is not a generally accepted definition.
1. Are they useful?
2. Are they going to become more useful in the foreseeable future?
On 1, I would say, maybe? Like, somewhere between Microsoft Word and Excel? On 2, I would say, sure - an 'AGI' would be tremendously useful. But it's also tremendously unlikely to somehow grow out of the current state of the art. People disagree on that point, but I don't think there are even compelling reasons to believe that LLMs can evolve beyond their current status as bullshit generators.