zlacker

[return to "Greg Brockman quits OpenAI"]
1. jumplo+pc[view] [source] 2023-11-18 01:09:25
>>nickru+(OP)
Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.

Sam claims LLMs aren't sufficient for AGI (rightfully so).

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.

In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.

2. cedws+XM[view] [source] 2023-11-18 05:30:42
>>jumplo+pc
>Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

I thought this guy was supposed to know what he's talking about? There was a paper showing that LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.

[0] https://arxiv.org/abs/2309.12288

3. Medium+uV[view] [source] 2023-11-18 06:47:43
>>cedws+XM
> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.

This only shows that the LLMs available to them, with the training and augmentation methods they employed, weren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
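The failure mode the paper describes can be illustrated with a toy sketch (hypothetical illustration, not the paper's actual evaluation code): a "model" that memorizes facts only in the direction it saw them during training answers the forward query but has nothing stored for the reverse one.

```python
# Toy illustration of the Reversal Curse: finetuning on "A is B"
# does not automatically teach the model "B is A".

# Fictitious training statement, as in the paper's setup.
training_data = [
    ("Uriah Hawthorne", "composer of", "Abyssal Melodies"),
]

# Forward-only memorization: (subject, relation) -> object.
forward_index = {(subj, rel): obj for subj, rel, obj in training_data}

def answer(subject, relation):
    """Return the memorized completion, or None if never seen in this direction."""
    return forward_index.get((subject, relation))

# Forward query succeeds:
print(answer("Uriah Hawthorne", "composer of"))  # Abyssal Melodies

# Reverse query ("Who composed 'Abyssal Melodies'?") fails, because the
# fact was never stored in the object -> subject direction:
reverse_index = {}  # nothing was trained in reverse
print(reverse_index.get(("Abyssal Melodies", "composed by")))  # None
```

A real LLM is not a lookup table, of course; the point of the analogy is that gradient updates on "A is B" text measurably behave like the forward-only index here, which is what the paper's finetuning experiments demonstrate.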
