zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. kamika+t51[view] [source] 2024-05-15 14:30:33
>>Jimmc4+(OP)
who is this and why is it important? [1]

super-alignment co-lead with Ilya (who resigned yesterday)

what is super alignment? [2]

> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.

[1] https://jan.leike.name/
[2] https://openai.com/superalignment/

2. jvande+n71[view] [source] 2024-05-15 14:40:16
>>kamika+t51
My honest-to-god guess is that it just seemed like a needless cost center in a growing business, so there was pressure against them doing the work they wanted to do.

I'm guessing, but OpenAI probably wants to start monetizing and doesn't really believe it's going to hit superintelligence, even if that was the original goal.

3. mjr00+Ea1[view] [source] 2024-05-15 14:55:12
>>jvande+n71
Yeah, OpenAI is all-in on the LLM golden goose and is much more focused on monetizing it, via embedded advertisements, continuing to provide "safety" via topic restrictions, etc., than on going further down the AGI route.

There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.

4. stewar+Cd1[view] [source] 2024-05-15 15:08:38
>>mjr00+Ea1
How can you be so certain there's zero chance LLMs lead to AGI/superintelligence? Asking out of curiosity; that's not something I've heard before.
5. guitar+4i1[view] [source] 2024-05-15 15:29:18
>>stewar+Cd1
A pure LLM-based approach will not lead to AGI, I'm 100% sure. A recent research paper [0] showed that no matter which LLM is used, performance exhibits diminishing returns with scale, when you'd want at least a linear improvement curve if you're aiming for AGI (rough sketch of the difference below).

[0] https://www.youtube.com/watch?v=dDUC-LqVrPU
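
For what it's worth, here's a toy illustration (Python) of the distinction I mean. The curve shapes and units are my own assumptions for illustration, not numbers from the linked paper:

    # Illustrative only: compares a "diminishing returns" (log-like) capability curve
    # with the roughly linear curve you'd want to see on the way to AGI.
    # Functional forms and units are assumptions, not results from the paper.
    import numpy as np

    compute = np.logspace(0, 6, 7)        # hypothetical training-compute budgets (arbitrary units)
    capability_log = np.log10(compute)    # each 10x of compute adds only a constant bump
    capability_lin = 1e-5 * compute       # linear gains: capability keeps pace with compute

    for c, lg, ln in zip(compute, capability_log, capability_lin):
        print(f"compute={c:>10.0f}  log-like={lg:5.2f}  linear={ln:10.2f}")

On the log-like curve, every additional order of magnitude of compute buys the same fixed improvement, which is what "diminishing returns" means here.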

6. sebzim+Pk1[view] [source] 2024-05-15 15:41:58
>>guitar+4i1
Based on the abstract, this is about image models, not LLMs.