* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded
* OpenAI will try to get there before everyone else, but do it safely and cheaply, so that their solution becomes the ubiquitous one rather than a reckless alternative
* Reckless AGI development should not be allowed
It's basically the Manhattan project argument (either we build the nuke or the Nazis will).
I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.
Many people on HN seem to disagree with the premise: they believe that AI is not dangerous now and won't be in the future, or they still believe that AGI is a lifetime or more away.