All these companies are doing now is taking an existing inference engine, making it 3% faster or 3% more accurate per quarter, and fighting over the $20/month users.
One can imagine product is now taking the wheel from engineering and building ideas on how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the 1-, 2-, and 3-year roadmaps for any of these $20/month companies.
To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding and not guessing, if that understanding isn't actually happening, it's not a breakthrough.
Now, with product leading the way, it's really interesting to see where these engineers head.
"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."
This becomes pretty clear when you get to more complex algorithms or low-level details like drawing a stack frame. There is no logic there.
I can ask ChatGPT questions that require logic to answer, and it will do just fine in most cases. It has certain limitations, but to say it isn't able to apply logic is just completely contrary to my experience with ChatGPT.
Asked whether a python has legs, given the premise that all snakes have legs, ChatGPT answers:
> Yes, if we assume the statement "all snakes have legs" to be true and accept that a python is a type of snake, then logically, a python would have legs. This conclusion follows from the structure of a logical syllogism:
> 1. All snakes have legs.
> 2. A python is a snake.
> 3. Therefore, a python has legs.
> However, it’s important to note that in reality, snakes, including pythons, do not have legs. This logical exercise is based on the hypothetical premise that all snakes have legs.
ChatGPT clearly understands the logic of the question, answers correctly, and then tells me that the premise of my question is incorrect.
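The syllogism itself is, of course, trivially mechanizable. For comparison, here's a minimal Python sketch of the same inference as blind rule application (the `is_snake`/`has_legs` predicates and the tiny forward-chaining loop are just an illustration, not anything ChatGPT runs internally):

```python
# Toy forward-chaining inference over (predicate, subject) facts.
# Purely illustrative; the predicate names are made up for this example.

facts = {("is_snake", "python")}

# "All snakes have legs": anything that is_snake also has_legs.
rules = [(("is_snake",), "has_legs")]

def forward_chain(facts, rules):
    """Keep applying rules to the known facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# e.g. {('is_snake', 'python'), ('has_legs', 'python')}
```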
You can say, "But it doesn't really understand logic. It's just predicting the most likely token." Well, it responds exactly how someone who understands logic would respond. If you assert that that's not the same as applying logic, then I think you're essentially making a religious statement.