zlacker

[return to "Greg Brockman quits OpenAI"]
1. jumplo+pc[view] [source] 2023-11-18 01:09:25
>>nickru+(OP)
Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.

Sam claims LLMs aren't sufficient for AGI (rightfully so).

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.

In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.

◧◩
2. abra0+Nn[view] [source] 2023-11-18 02:20:32
>>jumplo+pc
>rightfully so

How the hell can people be so confident about this? You describe two smart people reasonably disagreeing about a complicated topic.

◧◩◪
3. jumplo+Qp[view] [source] 2023-11-18 02:41:26
>>abra0+Nn
The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge; they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and they show that the Turing test may be flawed.

Given that AGI means matching humans at "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Either way, something more than today's LLMs is needed to reach AGI; what that is, we don't yet know!

◧◩◪◨
4. dboreh+4s[view] [source] 2023-11-18 02:59:12
>>jumplo+Qp
Prediction: there isn't a difference. The apparent difference is a manifestation of human brains' delusions about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, only achievable via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've long been skeptical about AI, but the writing is on the wall now.
◧◩◪◨⬒
5. mlyle+nx[view] [source] 2023-11-18 03:35:42
>>dboreh+4s
Clearly there's a difference, because the architectures we have don't know how to persist information or further train.

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Whether you can bolt some small persistence mechanism onto these architectures and get AGI is an open question, but what we have is clearly insufficient by design.

I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.

◧◩◪◨⬒⬓
6. darker+Jz[view] [source] 2023-11-18 03:55:32
>>mlyle+nx
> Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.

◧◩◪◨⬒⬓⬔
7. mlyle+rA[view] [source] 2023-11-18 04:00:56
>>darker+Jz
I can remember to do something tomorrow after doing many things in-between.

I can reason about something and then combine it with something I reasoned about at a different time.

I can learn new tasks.

I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.

The GPT-style LLMs we have now cannot do these things. Doing those things may be a small change, or it may not be tractable for these architectures at all... but it's probably in-between: hard, but something that can be "tacked on."

◧◩◪◨⬒⬓⬔⧯
8. blacko+aI[view] [source] 2023-11-18 04:55:38
>>mlyle+rA
That just proves we need real-time fine-tuning of the neuron weights. It is computationally intensive but not fundamentally different. A million-token context would act like a long short-term memory, and frequent fine-tuning would be akin to long-term memory.

I'm most probably anthropomorphizing this completely wrong. But the point is that humans may not be any more creative than an LLM; we just have better computation and inputs. Maybe creativity is akin to LLM hallucinations.
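
A toy sketch of that idea, purely illustrative (the tiny model and the absorb() step below are stand-ins, not anyone's actual system): whatever still fits in the context window plays the role of short-term memory, and each new interaction gets a gradient step so it is absorbed into the weights as long-term memory.

    import torch
    import torch.nn as nn

    vocab, dim = 1000, 64
    # Stand-in for an LLM: embed tokens, predict the next one.
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def absorb(tokens: torch.Tensor) -> None:
        """One online fine-tuning step: next-token loss on the latest interaction."""
        logits = model(tokens[:-1])                     # predict each following token
        loss = nn.functional.cross_entropy(logits, tokens[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # "Short-term memory" = whatever is still in the context window;
    # "long-term memory" = whatever has been absorbed into the weights.
    absorb(torch.randint(0, vocab, (32,)))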

◧◩◪◨⬒⬓⬔⧯▣
9. mlyle+UL[view] [source] 2023-11-18 05:23:13
>>blacko+aI
Real-time fine-tuning would be one approach that probably helps with some things (improving performance at a task based on feedback) but is probably not well suited for others (remembering analogous situations, setting goals; it's not really clear how one fine-tunes a context window into persistence in an LLM). There's also the concern that right now we seem to need many, many more training examples than humans get before the machine becomes passably good at similar tasks.

I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.

I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration, metacognition, and saving state that give us an advantage. I think we need to find clever, minimal ways to build these "looping" contexts.
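
One minimal sketch of what such a loop could look like (hypothetical throughout; generate() is just a stub standing in for any model call): persist a goal and running notes outside the context window, and re-inject them into every prompt.

    import json, pathlib

    STATE = pathlib.Path("agent_state.json")

    def generate(prompt: str) -> str:
        # Stub standing in for an LLM call; a real system would query a model here.
        return "next step toward the goal"

    def load_state() -> dict:
        return json.loads(STATE.read_text()) if STATE.exists() else {"goal": "", "notes": []}

    def step(state: dict) -> dict:
        # Re-inject the saved goal and the last few notes into each prompt,
        # so the goal persists across otherwise stateless context windows.
        prompt = f"Goal: {state['goal']}\nRecent notes: {state['notes'][-5:]}\nWhat next?"
        state["notes"].append(generate(prompt))
        STATE.write_text(json.dumps(state))
        return state

    state = load_state()
    state["goal"] = state["goal"] or "keep working on a long-term task"
    for _ in range(3):
        state = step(state)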
