super-alignment co-lead with Ilya (who resigned yesterday) [1]
what is super alignment? [2]
> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.
[1] https://jan.leike.name/

[2] https://openai.com/superalignment/
I'm guessing, but OpenAI probably wants to start monetizing and doesn't really believe it's going to hit superintelligence, even if that may have been the goal originally.
There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group dedicated to superintelligence alignment is unnecessary.
Do you have a better analogy? I'd like to hear more about why ML models can't be intelligent, if you don't mind.
I'm pretty skeptical of the idea that we know enough at this point to make that claim definitively.
Books (and writing) are a big force in cultural evolution.