They've achieved marvellous things, OpenAI, but the pivot away from openness, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially given the enormous ethical implications of their lead in the field.
The thing is, it's probably a good thing. The "Open" part of OpenAI was always an almost suicidally bad mistake. The point of starting the company was to try to reduce the p(doom) of creating an AGI, and it's almost certain that p(doom) increases the more people have access to powerful and potentially dangerous tech. I think OpenAI is one of the most dangerous organizations on the planet, and becoming more closed reduces that danger slightly.
https://openai.com/blog/planning-for-agi-and-beyond/
Based on Sam's statement, they seem to be betting that accelerating progress on AI now will help solve the control problem faster in the future. This strikes me as an extremely dangerous bet: if they're wrong, they're substantially reducing the time the rest of the world has to solve the problem, potentially shrinking that window so much that it won't be solved in time at all, and then foom.
Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'd all hold to a truce even if OpenAI wanted to.
I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...