super-alignment co-lead with Ilya (who resigned yesterday)
what is super alignment? [2]
> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.
[1] https://jan.leike.name/ [2] https://openai.com/superalignment/
I'm guessing, but OpenAI probably wants to start monetizing and doesn't really believe they're going to reach superintelligence, even if that was the goal originally.
There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a superintelligence-alignment group is unnecessary.
I'm dialing my probability back to 99%; I still don't believe just feeding more data to an LLM will do it. But I'll allow for the possibility.
The next steps would come from entirely different directions, like implementing actual reasoning, global outline planning, and the capacity to keep learning after training is done.