1. dekhn (OP) 2023-11-18 04:48:54
This sort of property ("loosely tell it what it needs to do, step by step, and it does it") is definitely exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say it's instead an emergent property of language models trained on extremely large corpora: corpora that contain many examples which, in embedding space, aren't that far from what you're asking the model to do.
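
To make that "not far in embedding space" point a bit more concrete, here's a minimal sketch using the sentence-transformers library. The model name, prompt, and example sentences are purely illustrative assumptions on my part, not anything specific to the models being discussed:

    # Sketch: measure how close a prompt sits to training-style examples
    # in embedding space. Model and sentences are illustrative assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    prompt = "Write a Python function that reverses a linked list."
    corpus = [
        "How do I reverse a singly linked list in Python?",    # close paraphrase
        "Implement list reversal with pointer manipulation.",  # related task
        "What is the capital of France?",                      # unrelated
    ]

    prompt_emb = model.encode(prompt, convert_to_tensor=True)
    corpus_emb = model.encode(corpus, convert_to_tensor=True)

    # Cosine similarity: higher means the prompt is "closer" to that example.
    for sentence, score in zip(corpus, util.cos_sim(prompt_emb, corpus_emb)[0]):
        print(f"{score:.2f}  {sentence}")

The intuition is that a prompt which scores high against many corpus examples isn't asking the model to do anything genuinely new; it's interpolating among things it has effectively already seen.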

I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as the ability to solve truly novel problems outside the training corpus. I suspect there still needs to be a fair amount of work on the model design itself, the training data, and even the mental models of ML researchers before we have systems that can truly reason in a way that demonstrates generalized intelligence.
