All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.
OpenAI made a large commitment to superalignment in the not-so-distant past, I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.
Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".
To really stress this: OpenAI's still-present cofounder said yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.
Link? Is the ~2-year timeline a common estimate in the field?
> I don't think it's going to happen next year, it's still useful to have the conversation and maybe it's like two or three years instead.
This doesn't seem like a super definite prediction. The "two or three" might just have been a hypothetical.