zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. kamika+t51[view] [source] 2024-05-15 14:30:33
>>Jimmc4+(OP)
who is this and why is it important? [1]

super-alignment co-lead with Ilya (who resigned yesterday)

what is super alignment? [2]

> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.

[1] https://jan.leike.name/ [2] https://openai.com/superalignment/

2. jvande+n71[view] [source] 2024-05-15 14:40:16
>>kamika+t51
My honest-to-god guess is that it just seemed like a needless cost center in a growing business, so there was pressure against them doing the work they wanted to do.

I'm guessing, but OpenAI probably wants to start monetizing and doesn't really believe it's going to hit superintelligence, even if that was the goal originally.

3. mjr00+Ea1[view] [source] 2024-05-15 14:55:12
>>jvande+n71
Yeah, OpenAI is all-in on the LLM golden goose and is much more focused on how to monetize it via embedding advertisements, continuing to provide "safety" via topic restrictions, etc., than going further down the AGI route.

There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.

4. stewar+Cd1[view] [source] 2024-05-15 15:08:38
>>mjr00+Ea1
How can you be so certain there is zero chance LLMs lead to AGI/superintelligence? Asking out of curiosity; it's not a claim I've heard before.
5. guhida+0h1[view] [source] 2024-05-15 15:23:57
>>stewar+Cd1
I'm 100% certain that I need to do more than just predict the next token to be considered intelligent. Also call me when ChatGPT can manipulate matter.
6. mypalm+v62[view] [source] 2024-05-15 19:36:33
>>guhida+0h1
Are you 100% certain that the human brain performs no language processing which is analogous to token prediction?
7. stubis+5N2[view] [source] 2024-05-16 00:34:33
>>mypalm+v62
A human brain certainly does do predictions, which is very useful to the bit that makes decisions. But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize? The best it can do is blindly follow the mob, a behavior we consider unintelligent even when done by human brains.
8. craken+L63[view] [source] 2024-05-16 04:37:07
>>stubis+5N2
> But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize?

My intuition is that these are emergent properties of sufficiently large and complex prediction engines. A good enough prediction/optimization engine can act in an agentic way even though it was never given that explicit goal (see the sketch below).

I recently read this very interesting piece that dives into this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
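
As a toy illustration (my own sketch; the NEXT and ACTIONS tables and all the probabilities are made up, nothing from OpenAI or the linked post): take a "model" that only ever predicts the next token and wrap it in a small harness that maps the generated text onto actions. The combination starts to look like it's deciding things, even though the predictor itself never had that goal.

    import random

    # Toy next-token table (all probabilities invented for illustration).
    NEXT = {
        "<start>": {"move": 0.6, "wait": 0.4},
        "move":    {"left": 0.5, "right": 0.5},
        "wait":    {"<end>": 1.0},
        "left":    {"<end>": 1.0},
        "right":   {"<end>": 1.0},
    }

    def predict_next(token):
        # Pure prediction: sample the next token, nothing more.
        options = NEXT[token]
        return random.choices(list(options), weights=list(options.values()))[0]

    def generate():
        # Keep predicting until <end>; still nothing in here "decides" anything.
        out, tok = [], "<start>"
        while (tok := predict_next(tok)) != "<end>":
            out.append(tok)
        return tuple(out)

    # The "agentic" part lives entirely in the harness around the predictor:
    # it interprets the predicted tokens as an action to take.
    ACTIONS = {("move", "left"): "step left",
               ("move", "right"): "step right",
               ("wait",): "do nothing"}

    plan = generate()
    print("predicted:", plan, "->", ACTIONS.get(plan, "unknown"))

Swap the toy table for a trained LLM and the harness for tool calls, and you get the agent loops people are wiring up today; whether that ever amounts to real judgement is exactly the open question upthread.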

9. soulof+wC3[view] [source] 2024-05-16 11:46:35
>>craken+L63
I'm of the belief that the entire conscious experience is a side effect of the need for us to make rapid predictions when time is of the essence, such as when hunting or fleeing. Otherwise, our subconscious could probably handle most of the work just fine.