zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. kamika+t51 2024-05-15 14:30:33
>>Jimmc4+(OP)
who is this and why is it important? [1]

super-alignment co-lead with Ilya (who resigned yesterday)

what is super alignment? [2]

> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.

[1] https://jan.leike.name/ [2] https://openai.com/superalignment/

2. jvande+n71 2024-05-15 14:40:16
>>kamika+t51
My honest-to-god guess is that it just seemed like a needless cost center in a growing business, so there was pressure against them doing the work they wanted to do.

I'm guessing, but OpenAI probably wants to start monetizing, and doesn't really believe they're going to hit superintelligence, even if that was the goal originally.

3. mjr00+Ea1 2024-05-15 14:55:12
>>jvande+n71
Yeah, OpenAI is all-in on the LLM golden goose and is much more focused on how to monetize it via embedding advertisements, continuing to provide "safety" via topic restrictions, etc., than going further down the AGI route.

There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.

4. stewar+Cd1 2024-05-15 15:08:38
>>mjr00+Ea1
How can you be so certain there's zero chance LLMs lead to AGI/superintelligence? Asking out of curiosity; it's not a claim I've heard before.
5. barlin+Vi1 2024-05-15 15:33:00
>>stewar+Cd1
LLMs are gigantic curves fitted to civilizational-scale datasets, and their predictions are just readouts of that fit. A language model is a mathematical construct; it can be no more intelligent than that algebra book sitting on your shelf.
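
To make the "curve fitting" point concrete, here's a toy sketch (my own illustration, nothing like how real LLMs are actually trained): a "language model" reduced to its bare essence, fitting counts of which word follows which, then predicting the most frequent continuation.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "civilizational scale dataset".
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training" = fitting a curve to the data; here, just counting
# how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often
```

The model never "understands" anything; it only reproduces the statistics of its training data. Whether scaling that idea up by twelve orders of magnitude changes the picture is exactly what people disagree about.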
6. dwaltr+jH1 2024-05-15 17:22:11
>>barlin+Vi1
An algebra book is a collection of paper pages with ink on them. An LLM is... nothing like that at all. LLMs are complex machines that operate on data and produce data. Books are completely static. They don't do anything.

Do you have a better analogy? I'd like to hear more about how ML models can't be intelligent, if you don't mind.

I'm pretty skeptical of the idea that we know enough at this point to make that claim definitively.

7. andsoi+sA2 2024-05-15 22:34:34
>>dwaltr+jH1
> Books are completely static. They don't do anything.

Books (and writing) are a big force in cultural evolution.

8. dwaltr+yK2 2024-05-16 00:09:34
>>andsoi+sA2
Yes, I love books. They are awesome. But we are talking about machine intelligence, so that's not super relevant.

Books aren't data/info-processing machines, by themselves. LLMs are.
