Well, at least I'm glad you admit it's due to your stubbornness and unwillingness to change beliefs when confronted with evidence.
Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity"), Ilya Sutskever, Geoffrey Hinton, Yoshua Bengio, Jan Leike, Paul Christiano (creator of RLHF), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind) all believe AGI poses an existential risk to humanity.
>But over the past few years, safety culture and processes have taken a backseat to shiny products.
you know what else happened over the past few years? openai started making money. so while sama was producing soundbites for headlines about the existential threat of AI, internally, all the useful idiots had already been told to shut the fuck up.