Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.
Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.
Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!
Without persistence outside of the context window, they can't even maintain a dynamic, stable higher-level goal.
Whether you can bolt something small onto these architectures for persistence, do a few other small things, and get AGI is an open question, but what we have is clearly insufficient by design.
I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.
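For what it's worth, here's a minimal sketch of what "bolting something small on for persistence" could look like: a goal and running notes stored outside the context window and re-injected each session. Everything here is illustrative; `llm_complete` is a hypothetical stand-in for whatever model API you use, and whether this kind of shim gets you anywhere near AGI is exactly the open question.

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # lives outside the model's context window

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"goal": "", "notes": []}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def run_session(user_msg: str, llm_complete) -> str:
    state = load_state()
    # Re-inject the persistent goal and progress notes each session so the
    # model can keep working toward the same higher-level goal.
    prompt = (
        f"Long-term goal: {state['goal']}\n"
        f"Progress notes so far: {state['notes'][-10:]}\n"
        f"User: {user_msg}\n"
        "Reply, then add one line starting with NOTE: summarizing any progress."
    )
    reply = llm_complete(prompt)  # hypothetical stand-in for a model API call
    for line in reply.splitlines():
        if line.startswith("NOTE:"):
            state["notes"].append(line[len("NOTE:"):].strip())
    save_state(state)
    return reply
```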
I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.
I can reason about something and then combine it with something I reasoned about at a different time.
I can learn new tasks.
I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.
The GPT-style LLMs we have now cannot do these things. Doing them may be a small change, or may not be tractable for these architectures at all... but it's probably in-between: hard, but something that can be "tacked on."
I most probably am anthropomorphizing completely wrong. But the point is that humans may not be any more creative than an LLM; we just have better computation and inputs. Maybe creativity is akin to LLM hallucinations.
I would also say that I believe long-term goal-oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but you need to map your own self-state onto those stories to learn anything from them about what to do next.
I feel like LLMs are much smarter than we are at thinking "per symbol", but we have facilities for iteration, metacognition, and saving state that give us an advantage. I think we need to find clever, minimal ways to build these "looping" contexts.
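As one concrete (if toy) reading of a "looping" context, here's a sketch of a draft/critique/revise loop where the iteration and a crude form of metacognition live outside the model; `llm` is a hypothetical callable, not any particular API.

```python
def solve_with_reflection(task: str, llm, max_iters: int = 3) -> str:
    """Draft / critique / revise loop: the iteration, self-checking, and
    state-keeping happen in this outer loop, not inside the model."""
    draft = llm(f"Task: {task}\nWrite a first attempt.")
    for _ in range(max_iters):
        critique = llm(
            f"Task: {task}\nAttempt:\n{draft}\n"
            "List concrete flaws, or reply OK if there are none."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model judges its own attempt good enough
        draft = llm(
            f"Task: {task}\nAttempt:\n{draft}\nFlaws:\n{critique}\n"
            "Write an improved attempt."
        )
    return draft
```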
Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.
But also, you don't need all of these to have real intelligence. People can problem-solve without memory, since those are different things. People can intelligently problem-solve without a task.
And working towards long-term goals is something we actually take decades to learn. And many people fail at it as well.
I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, and so on. It seems they started doing this with the vision update in GPT-4.
I wouldn't be surprised if these LLMs really become the backbone of AGI. But this is science: you don't really know what'll work until you do it.
Yes-- this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).
It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).
Who cares? Sometimes remixing such patterns is exactly what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.
Nope, and not all people can achieve this either. Would you call them less than human, then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.
I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.
My beef with RAG is that it doesn't match on information that is not explicit in the text: "the fourth word of this phrase" won't embed anything like the word "of", and "Bruce Willis' mother's first name" won't match "Marlene". To fix this we need to draw chain-of-thought inferences from the chunks we index in the RAG system.
So my conclusion is that maybe we got the model about right but the data is too messy; we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.
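A sketch of what "studying the data with the model prior to indexing" might look like, with hypothetical `llm` and `embed` callables and a plain list as the index: each chunk is expanded into the facts it implies, and those inferences are embedded alongside the raw text so that implicit information (like "Marlene") becomes explicitly matchable.

```python
def index_with_inferences(chunks, llm, embed, index=None):
    """Augment each chunk with model-drawn chain-of-thought inferences before
    embedding, so implicit facts become explicit, searchable text."""
    index = [] if index is None else index
    for chunk in chunks:
        inferences = llm(
            "List the facts that are implied but not stated verbatim in this "
            "text, one per line:\n" + chunk
        )
        for text in [chunk, *inferences.splitlines()]:
            if text.strip():
                # Store the original chunk as the payload either way, so a hit
                # on an inference still returns the source passage.
                index.append((embed(text), chunk))
    return index
```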
Everyone is over-focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up; we forget that. All the smarts the model has come from the data. We need data improvement more than model improvement.
Just consider the "textbook-quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.
The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?
I think creativity is made of two parts: generating novel ideas, and filtering out the bad ones. For the second part we need good feedback. Humans and LLMs are about equally good at novel ideation, but humans have the advantage on feedback: we have a body, access to the real world, access to other humans, and plenty of tools.
That is not something an android robot couldn't eventually have, and on top of that, AIs have the advantage of learning from massive data. They surpass humans when they can leverage it - see AlphaFold, for example.
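A toy illustration of that generate-then-filter view, with `llm` as a hypothetical sampling callable and `score` standing in for whatever external feedback is available (tests, a simulator, human ratings):

```python
def brainstorm_and_filter(task: str, llm, score, n: int = 8) -> str:
    """Creativity as two parts: sample many candidate ideas at high temperature
    (novel ideation), then keep the one external feedback rates best (filtering)."""
    candidates = [
        llm(f"Task: {task}\nPropose one unusual approach.", temperature=1.2)
        for _ in range(n)
    ]
    # `score` is the feedback channel; this is where humans currently win.
    return max(candidates, key=score)
```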
I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant memory retrieval process, and the retrieved memories are packaged up and factored into our mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and the results are then shoved into the prompt as additional context. A bigger context window allows for more "memories" to contextualise/inform the model's answer.
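That understanding maps onto something like the sketch below (hypothetical `embed` and `llm` callables; `memory_db` is just a list of (embedding, text) pairs): embed the query, rank stored memories by cosine similarity, and stuff the top few into the prompt as extra context.

```python
import numpy as np

def retrieve_memories(query, memory_db, embed, k=5):
    """Return the k stored 'memories' most similar to the query."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    scored = [
        (float(np.dot(q, e / np.linalg.norm(e))), text) for e, text in memory_db
    ]
    return [text for _, text in sorted(scored, reverse=True)[:k]]

def answer_with_memories(query, memory_db, embed, llm):
    memories = retrieve_memories(query, memory_db, embed)
    context = "\n".join(f"- {m}" for m in memories)
    # The retrieved memories are simply prepended to the prompt as extra
    # context; a bigger context window just means more of them fit.
    return llm(f"Relevant memories:\n{context}\n\nQuestion: {query}")
```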
I've been wondering three things: (1) are previous user prompts and model answers also converted to embeddings and stored in the embedding database as new "memories", essentially making the model "smarter" as it accumulates more "experiences"? (2) could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval (with the salience score probably some composite of recency and perhaps the degree of positive feedback from the original user)? (3) could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)
Edit: And if you did (3), would that mean even with a temperature set at 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
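On (1) and (2), a sketch of how such a salience score could be composed, purely speculative and with made-up weights: new interactions are stored as memories with a timestamp and a feedback count, and retrieval blends semantic similarity with a recency decay plus a feedback bonus.

```python
import math
import time

def salience(memory, now=None, half_life_days=30.0):
    """Composite salience: exponential recency decay plus a small bonus per
    piece of positive user feedback (the weights here are arbitrary)."""
    now = time.time() if now is None else now
    age_days = (now - memory["created_at"]) / 86400
    recency = math.exp(-age_days * math.log(2) / half_life_days)  # 0.5 at the half-life
    feedback_bonus = 0.1 * memory.get("positive_feedback", 0)
    return recency + feedback_bonus

def rank_memories(candidates, similarity):
    # Final retrieval score blends semantic similarity with salience.
    return sorted(candidates, key=lambda m: similarity(m) + salience(m), reverse=True)
```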
You're right: I haven't seen evidence of LLMs outputting novel patterns, i.e. output that is genuinely creative.
It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (e.g. grammar, phonics, or an index). But it can't, whatsoever, expose new patterns. At least the public-facing LLMs can't. They can't abstract.
I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.
But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found, let alone traversed.
When they can find novel patterns across prior, seemingly unconnected concepts, then they will be onto something. When "AI" begins to see the hidden mirrors, so to speak.
Most of that is encoded into weights during training, though external function call interfaces and RAG are broadening this.