It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, driving the people they attempted to slow down into the arms of people with more money and fewer morals. Well.
Indeed, I think trying to do it that way increases the risk that the single private organization captures its regulators and ends up without effective oversight. To put it bluntly: I think it's going to be easier, politically, to regulate this technology when it's a battle between Microsoft, Meta, and Google, all focused on commercial applications, than when the clearly dominant organization is a nonprofit that is supposedly altruistic and self-regulating.
I have sympathy for people who think that all sounds like a bad outcome because they are skeptical of politics and trust the big brains at OpenAI more. But personally I think governments have the ultimate responsibility to look out for the interests of the societies they govern.
Um, have you heard of lead additives in gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology in the face of an economic incentive to ignore or minimize "externalities" and long-term risk for short-term gain.
The idea of having a non-profit, with an explicit mandate to pursue the benefit of all mankind, be the first one to achieve the next levels of technology was at least worth a shot. OpenAI's existence doesn't stop other companies from pursuing the technology, nor does it prevent governments from coordinating. But it at least gives a chance that a potentially dangerous technology will go in the right direction.