zlacker

[return to "Greg Brockman quits OpenAI"]
1. jumplo+pc 2023-11-18 01:09:25
>>nickru+(OP)
Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.

Sam claims LLMs aren't sufficient for AGI (rightfully so).

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.

In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.

2. abra0+Nn 2023-11-18 02:20:32
>>jumplo+pc
>rightfully so

How the hell can people be so confident about this? You're describing two smart people reasonably disagreeing about a complicated topic.

3. jumplo+Qp 2023-11-18 02:41:26
>>abra0+Nn
The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge, they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.

Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Either way, something more than today's LLMs is needed to reach AGI; what that is, we don't yet know!

4. dboreh+4s 2023-11-18 02:59:12
>>jumplo+Qp
Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusions about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, achievable only via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've long been skeptical about AI, but the writing is on the wall now.
5. mlyle+nx 2023-11-18 03:35:42
>>dboreh+4s
Clearly there's a difference, because the architectures we have have no way to persist information or keep training once deployed.

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Whether you can bolt some small persistence mechanism onto these architectures, make a few other small changes, and get AGI is an open question, but what we have today is clearly insufficient by design.

I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.
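
To make "bolt something small on for persistence" concrete, here is a rough sketch of the kind of external memory loop people usually mean. It's a sketch under assumptions: llm() is a placeholder for any stateless chat-completion call, and a plain text file stands in for the store. The model itself stays frozen; its "memory" is just notes that get re-injected into every prompt and rewritten by the model.

    # Hypothetical persistence bolted onto a stateless model: notes live outside
    # the context window and are fed back in on every call.

    def llm(prompt: str) -> str:
        """Placeholder for any stateless chat-completion call."""
        raise NotImplementedError

    MEMORY_PATH = "memory.txt"  # persistent notes outside the context window

    def run_turn(user_input: str) -> str:
        try:
            memory = open(MEMORY_PATH).read()
        except FileNotFoundError:
            memory = "(no notes yet)"

        prompt = (
            "Persistent notes (goals and facts to remember):\n"
            f"{memory}\n\n"
            f"User: {user_input}\n"
            "Reply to the user, then output updated notes after a line "
            "containing only '### NOTES'."
        )
        output = llm(prompt)
        reply, _, new_notes = output.partition("### NOTES")
        if new_notes.strip():
            with open(MEMORY_PATH, "w") as f:
                f.write(new_notes.strip())
        return reply.strip()

Whether something that simple closes the gap, or just papers over it, is exactly the open question.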

6. visarg+e61 2023-11-18 08:28:09
>>mlyle+nx
But context windows have reached 100K tokens now, RAG systems are everywhere, and we can cheaply fine-tune LoRAs at a cost comparable to inference, maybe 3x more per token. A memory hierarchy made of LoRA -> Context -> RAG could be "all you need".
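
Very roughly, the hierarchy I mean could be glued together like this. All of it is hypothetical scaffolding: embed() and lora_generate() stand in for whatever embedding model and LoRA-tuned model you actually use.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        raise NotImplementedError  # any embedding model

    def lora_generate(prompt: str) -> str:
        raise NotImplementedError  # base model + fine-tuned LoRA adapter

    corpus: list[str] = []        # RAG tier: indexed chunks (large, cheap)
    index: list[np.ndarray] = []  # their embeddings
    recent_turns: list[str] = []  # context tier: the live conversation (small, fast)

    def retrieve(query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in index]
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [corpus[i] for i in top]

    def answer(query: str) -> str:
        prompt = (
            "Retrieved documents:\n" + "\n".join(retrieve(query)) +
            "\n\nRecent conversation:\n" + "\n".join(recent_turns[-10:]) +
            f"\n\nUser: {query}"
        )
        # LoRA tier: slow-changing knowledge baked into the weights
        reply = lora_generate(prompt)
        recent_turns.extend([f"User: {query}", f"Assistant: {reply}"])
        return reply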

My beef with RAG is that it doesn't match on information that isn't explicit in the text: "the fourth word of this phrase" won't embed like the word "of", and "Bruce Willis' mother's first name" won't match "Marlene". To fix this we need to draw chain-of-thought inferences from the chunks we index in the RAG system.

So my conclusion is that maybe we got the model right but the data is too messy; we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.
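
Concretely, "studying the data with the model prior to indexing" could look like the sketch below: index the model's chain-of-thought inferences alongside each raw chunk, so implicit facts (like "Marlene") become explicit, embeddable text. llm() and add_to_index() are placeholders for whatever model call and vector store you use.

    def llm(prompt: str) -> str:
        raise NotImplementedError  # any chat-completion call

    def add_to_index(text: str, source_chunk_id: int) -> None:
        raise NotImplementedError  # embed + insert into your vector store

    def index_with_inferences(chunks: list[str]) -> None:
        for i, chunk in enumerate(chunks):
            add_to_index(chunk, source_chunk_id=i)  # raw text, as usual
            inferences = llm(
                "List the facts that are implied but not stated verbatim in this "
                "passage, one per line, resolving names and relationships:\n\n" + chunk
            )
            for fact in inferences.splitlines():
                if fact.strip():
                    add_to_index(fact.strip(), source_chunk_id=i)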

Everyone is over-focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up; we forget that. All the smarts a model has come from its data. We need data improvement more than model improvement.

Just consider the "textbook quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.
