"The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI..."
More nonsense.
"...that's safe and beneficial."
Go on...
"Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets..."
The firm is obviously out of control; judged from first principles, any claim of responsibility in this context is moot.
When management is openly this screwed up in its internal governance, there's no reason to believe anything else it says about its intentions. The disbanding of the "superalignment" team is simply a public admission that the firm has no idea what it is doing.
As to the hype-mongering of the article, replace the string "AGI" everywhere it appears with "sentient-nuclear-bomb": how would you feel about this article?
You might want to see the bomb!
But all you'll find is a chatbot.
—
Bomb#20: You are false data.
Sgt. Pinback: Hmmm?
Bomb#20: Therefore I shall ignore you.
Sgt. Pinback: Hello... bomb?
Bomb#20: False data can act only as a distraction. Therefore, I shall refuse to perceive.
Sgt. Pinback: Hey, bomb?
Bomb#20: The only thing that exists is myself.
Sgt. Pinback: Snap out of it, bomb.
Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.
Boiler: What the hell is he talking about?
Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness.
AFAIK no consensus on what it means to think has developed past Turing's point above, and the "Imitation Game," a.k.a. the "Turing Test," which was Turing's way of throwing up his hands at the idea of defining thinking machines, is today's de facto standard for machine intelligence.
IOW a machine thinks if you think it does.
And by this definition the Turing Test was passed by Weizenbaum's "Eliza" chatbot in the mid-60s.
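Eliza's trick was nothing more than pattern matching: decomposition rules paired with canned reassembly templates that echo the user's own words back. A minimal sketch of that mechanism (the rules here are invented for illustration, not Weizenbaum's original DOCTOR script):

```python
import re

# Invented, illustrative Eliza-style rules: a regex that decomposes the
# user's utterance, and a template that reassembles a reply from it.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the first matching canned response, Eliza-style.

    There is no model of meaning here at all -- just string surgery.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

That's the whole "intelligence": if the reader supplies the understanding, the program appears to think.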
Modern chatbots have been refined a great deal since, and can accommodate far more sophisticated forms of interrogation, but their limits are still overwhelming, if not obvious to the uninitiated.
A crucial next measure of an AGI must be attended by the realization that it would be unethical to delete it, or perhaps even to reset it or turn it off. We are completely unprepared for such an eventuality, so recourse to pragmatism will demand that no transformer technology be defined as intelligent in any human sense. It will always be regarded as a simulation or a robot.