super-alignment co-lead with Ilya (who resigned yesterday)
what is super alignment? [2]
> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.
[1] https://jan.leike.name/ [2] https://openai.com/superalignment/
I'm guessing, but OpenAI probably wants to start monetizing, and doesn't really believe it is going to hit superintelligence. That may have been the goal originally.
To some of us, that sounds like, "Fire all the climate scientists because they are a needless cost center distracting us from the noble goal of burning as much fossil fuel as possible."
This is a tortured analogy, but what I'm getting at is: if OpenAI is no longer pursuing AGI/superintelligence, it doesn't need an expensive superintelligence alignment team.
What leads you to believe that's true?