* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded
* OpenAI will try to get there before everyone else, but do it safely and cheaply, so that their safeguarded solution becomes the ubiquitous one rather than a reckless alternative
* Reckless AGI development should not be allowed
It's basically the Manhattan Project argument (either we build the nuke or the Nazis will).
I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given their stated aims.
I honestly haven't made up my mind about AGI, or about whether LLMs are close enough to count as AGI. If governments were pondering an outright worldwide ban on the research and development, I don't know how I would actually feel about that. But I can't even imagine our governments pondering something so idealistic and even-handed.
I do know that LLMs represent a drastic advancement for many tasks, and that "Open" AI setting the tone with the Software-Augmented-with-Arbitrary-Surveillance (SaaS) "distribution" model continues the terrible trend of corporate centralization. The VC cohort is blind to this dynamic because they're at the helm of the centralizing corporations - while almost everyone else exists as the feedstock.
This lobbying is effectively a shameless attempt at regulatory capture, ensuring that any benefits of the new technology are gatekept by centralized corporations - essentially the worst possible outcome, where even the beneficial results of AGI/LLMs get transformed into detrimental effects on individual humans.