* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded
* OpenAI will try to get there before everyone else, but also do it safely and cheaply, so that their solution becomes the ubiquitous one rather than a reckless alternative
* Reckless AGI development should not be allowed
It's basically the Manhattan Project argument (either we build the nuke or the Nazis will).
I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.
Many people on HN seem to disagree with the premise: they believe that AI is not dangerous now and won't be in the future, or they still believe that AGI is a lifetime or more away.
* Is there a plausible path to safe AGI regardless of who's executing on it?
* Why do we believe OpenAI is the best equipped to get us there?
The Manhattan Project is an interesting analogy. But if that's the thinking, shouldn't the government spearhead the project instead of a private entity (so the effort is, theoretically at least, accountable to the electorate at large rather than just to investors)?
I honestly haven't made up my mind about AGI or whether LLMs are sufficiently AGI. If governments were pondering an outright worldwide ban on the research/development, I don't know how I would actually feel about that. But I can't even imagine our governments pondering something so idealistic and even-handed.
I do know that LLMs represent a drastic advancement for many tasks, and that "Open" AI setting the tone with the Software-Augmented-with-Arbitrary-Surveillance (SaaS) "distribution" model is a continuation of this terrible trend of corporate centralization. The VC cohort is blind to this dynamic because they're at the helm of the centralizing corporations - while most everyone else exists as the feedstock.
This lobbying is effectively just a shameless attempt at regulatory capture, ensuring that any benefits of the new technology are gatekept by centralized corporations - essentially the worst possible outcome, where even the beneficial results of AGI/LLMs are transformed into detrimental effects for individualist humanity.
I don't think anyone knows that for sure, but the alignment efforts at OpenAI are certainly better than nothing. If you read the GPT-4 technical report, the raw model is capable of some really nasty stuff, and that's presumably what we can expect from the kind of models people will be able to run at home in the coming years without any oversight.