Indeed, I think trying to do it that way increases the risk that the single private organization captures its regulators and ends up without effective oversight. To put it bluntly: I think it's going to be easier, politically, to regulate this technology when it's a battle between Microsoft, Meta, and Google, all focused on commercial applications, than when the clearly dominant organization is a nonprofit that is supposedly altruistic and self-regulating.
I have sympathy for people who think that all sounds like a bad outcome because they are skeptical of politics and trust the big brains at OpenAI more. But personally I think governments have the ultimate responsibility to look out for the interests of the societies they govern.
Um, have you heard of lead additives to gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology in the face of an economic incentive to ignore or minimize "externalities" and long-term risk for short-term gain.
The idea of having a non-profit, with an explicit mandate to pursue the benefit of all mankind, be the first one to achieve the next levels of technology was at least worth a shot. OpenAI's existence doesn't stop other companies from pursuing the technology, nor does it prevent governments from coordinating. But it at least gives a chance that a potentially dangerous technology will go in the right direction.
Most of those problems have since been solved, or at least reduced, by regulation. Regulators aren't all-knowing gods, and risks and problems often only come to light later, but with the exception of smoking they have covered those cases. (Anti-smoking laws do keep getting stricter, generally, depending on the country, but smoking is a cultural habit older than most states...)
You aren't wrong that government regulation is not a great solution, but I believe it is - like democracy, and for the same reasons - the worst solution, except for all the others.
I don't disagree that using a non-profit to enforce self-regulation was "worth a shot", but I thought it was very unlikely to succeed, and indeed it has been failing at that goal for a very long time. But I'm not mad at them for trying.
(I do think too many people used this as an excuse to argue against any government oversight by saying, "we don't need that, we have a self-regulating non-profit structure!", I think mostly cynically.)
> But it at least gives a chance that a potentially dangerous technology will go in the right direction.
I know you wrote this comment a full five hours ago and stuff has been moving quickly, but I think this needs to be in the past tense. It now appears clear that upwards of 90% of the OpenAI staff did not believe in this mission, and thus it was never going to work.
If you care about this, I think you need to be thinking about what else could give us that chance. I personally think government regulation is the only plausible option, but I won't begrudge folks who want to keep trying more novel ideas.
(And FWIW, I don't personally share the humanity-destroying concerns people have; but I think regulation is almost always appropriate for big new technologies to some degree, and that this is no exception.)