You can read the Superalignment announcement and see what it focuses on. The entire thing is about AGI x-risk, with one small paragraph acknowledging that other people work on things like bias and PC concerns.
These are different concerns held by different people. You and many others are pattern-matching AGI x-risk onto the AI bias crowd, to your own detriment, and it's poisoning the discourse. Listen to Emmett Shear (former Twitch CEO and former interim OpenAI CEO) explain this in depth: https://www.youtube.com/watch?v=jZ2xw_1_KHY&t=800s
yes, I have no doubt that some researchers, influenced by the juvenile fantasies omnipresent in media over the past half century, genuinely belong to the safety cult. I just refuse to believe that the people whose opinions and decisions actually matter are influenced by such fears, because unlike those few genuine cultists, the people in charge aren't fucking morons who think glorified-autocomplete pseudo-AI tools can escape into the matrix and start sending terminators into the past to destroy our democracy.
believing in the selflessness or social responsibility of corporations and politicians is incomprehensibly naive (to put it as safely and ethically as I possibly can)
Well, at least I'm glad you admit it's due to your stubbornness and unwillingness to change beliefs when confronted with evidence.
Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity"), Ilya Sutskever, Geoffrey Hinton, Yoshua Bengio, Jan Leike, Paul Christiano (creator of RLHF), Dario Amodei (Anthropic), Demis Hassabis (Google DeepMind) all believe AGI poses an existential risk to humanity.
>But over the past few years, safety culture and processes have taken a backseat to shiny products.
you know what else happened over the past few years? openai started making money. so while sama was producing soundbites for headlines about the existential threat of AI, internally all the useful idiots had already been told to shut the fuck up.