* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded
* OpenAI will try to get there before everyone else, but do so safely and cheaply, so that their solution becomes ubiquitous rather than a reckless one
* Reckless AGI development should not be allowed
It's basically the Manhattan Project argument (either we build the nuke or the Nazis will).
I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.
* Is there a plausible path to safe AGI regardless of who's executing on it?
* Why do we believe OpenAI is the best equipped to get us there?
The Manhattan Project is an interesting analogy. But if that's the thinking, shouldn't the government spearhead the project instead of a private entity (so that, theoretically at least, it's accountable to the electorate at large rather than just to investors)?