zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. llamai+lg 2024-05-17 17:44:43
>>jnnnth+(OP)
This won't make a dent in the logical armor of AI optimists:

[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI

[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and koolaid drinking

Unbridled optimism lives another day!

2. Aperoc+hh 2024-05-17 17:51:42
>>llamai+lg
That's an overcomplication. How about my naive belief that LLMs (and scaling them up) don't lead to AGI?

I'm not saying AGI is impossible, just that large models and the underlying statistical model beneath them are not the path.

3. jonono+WB1 2024-05-18 09:27:57
>>Aperoc+hh
Many teams are trying to combine their ideas with LLMs, because despite their weaknesses, LLMs (and related concepts such as RLHF, transformers, self-supervised learning, and internet-scale datasets) have made some remarkable gains. Those teams come from the whole spectrum of ML and AI research, and they want to use their ideas to overcome some of the weaknesses of current-day LLMs. Do you also think that none of these offspring can lead to AGI? Why not?