I'm not saying I agree that being closed source is in the public good, though one could certainly argue that accelerating bad actors' efforts to catch up would not be a positive.
If that were true, then they shouldn't have started off like that to begin with. You can't have it both ways: either you are pursuing your goal of being open (as the name implies), or the way you set yourself up was ill-suited all along.
There's nothing wrong with changing your opinion based on fresh information.
I don't really get that twist. What "fresh" information suddenly arrived? The structure they gave themselves was chosen explicitly with the risks of future developments in mind; in fact, as the complaint outlines, that is exactly why they chose that specific structure. How can it now be called new information that there are actually risks involved? That was the whole premise for creating the organization in that form to begin with!
When OpenAI was founded, it was expected that AGI would likely come out of Google, with OpenAI doing the world a favor by replicating this wonderful technology and giving it to the masses. One might have imagined AGI would be some Spock-like, stone-cold superintelligence.
As it turns out, OpenAI themselves were the first to create something AGI-like, so the role they had envisaged for themselves was totally flipped. Not only that, but this AGI wasn't an engineered intelligence; it was a stochastic parrot, trained on the internet and incredibly toxic, as much a liability as a powerful tool.
OpenAI's founding mission of AI democracy has turned into one of protecting us from the bullshitting psychopath they themselves created, while at the same time raising the billions of dollars it takes to iterate on something so dumb it needs to be retrained from scratch every time you want to update it.