If that were true, they shouldn't have started off like that to begin with. You can't have it both ways: either you are pursuing your goal of being open (as the name implies), or the way you set yourself up was ill-suited all along.
There's nothing wrong with changing your opinion based on fresh information.
I don't really get that twist. What "fresh" information suddenly arrived here? The structure they gave themselves was chosen explicitly with the risks of future developments in mind; in fact, that is why they chose that specific structure, as outlined in the complaint. How can it now count as new information that there are actually risks involved? That was the whole premise for creating the organization in that form in the first place!
When I was young I proudly insisted that all I ever wanted to eat was pizza. I am very glad that 1) I was allowed to evolve out of that desire, and 2) I am not constantly harangued as a hypocrite when I enjoy a nice salad.
What I believe doesn't matter. As an adult, if you set up contracts and structures based on principles which you bind yourself to, that's your decision. If you then convince people to join or support you based on those principles, you shouldn't be surprised if you get into trouble once you "change your opinion" and no longer fulfill your obligations.
> When I was young I proudly insisted that all I ever wanted to eat was pizza.
Good thing you can't enter into a contract as a child, isn't it?
That gives a lot of leeway for honest or dishonest intent.
It's not like they've gone closed source as a company, or threatened to run off to Microsoft as individuals, or talked up the need for $7 trillion of investment in semiconductors, because they've evolved the understanding that the technology is too dangerous to turn into a mass-market product they just happen to monopolise, is it?
From their charter: "resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person".
I just thought it might be important to provide more context. See the other comments for a discussion of "when applicable"; I think fixating on that phrase misses the point here.
When OpenAI was founded it was expected that AGI would likely come out of Google, with OpenAI doing the world a favor by replicating this wonderful technology and giving it to the masses. One might have imagined AGI would be some Spock-like stone cold super intelligence.
As it turns out, OpenAI themselves were the first to create something AGI-like, so the role they envisaged for themselves was totally flipped. Not only that, but this AGI wasn't an engineered intelligence but rather a stochastic parrot, trained on the internet, and incredibly toxic; as much a liability as a powerful tool.
OpenAI's founding mission of AI democracy has turned into one of protecting us from the bullshitting psychopath they themselves created, while at the same time raising the billions of dollars it takes to iterate on something so dumb it needs to be retrained from scratch every time you want to update it.
What goes unsaid, perhaps, is that back then (before the transformer had even been invented, before AlphaGo), what people might have imagined AGI to look like (some kind of sterile super-intelligence) was very different from the LLM-based "AGI" that eventually emerged.
So, what changed? What was the fresh information that warranted the change of opinion that open source was not the safest approach?
I'd say a few things.
1) As it turned out, OpenAI themselves were the first to develop a fledgling AGI, so they were not in the role they had envisaged: open-sourcing something to counteract an evil closed-source competitor.
2) The LLM-based form of AGI that OpenAI developed was really not what anyone imagined it would be. The danger of what OpenAI has developed, so far, isn't some doomsday "AI takes over the world" scenario, but rather that it's inherently a super-toxic chatbot (did you see OpenAI's examples of how it behaved before RLHF?!) that is potentially disruptive and harmful to society because of what it is rather than because of its intelligence. The danger (and the remedy) is not, so far, what OpenAI originally thought it would be.
3) OpenAI have been quite open about this in the past: Musk's departure as their major source of funds forced OpenAI to change how they were funded. At the same time (around GPT-2), it was becoming evident how extraordinarily expensive this unanticipated path to AGI would be to keep developing (Altman has indicated a cost of $100M+ to train GPT-3, maybe including hardware). They were no longer looking for a benefactor like Musk, willing and able to donate a few tens of millions of dollars, but needed a partner able to put billions into the effort, which meant an investor expecting a return on investment, and hence the corporate structure change to accommodate that.
Explanation: reducing the discussion to the two words "when applicable" (especially when ripped out of context) might be relevant in the legal sense, but it totally misses the bigger picture of the discussion here. I don't like being dragged onto tangents like that when they can be expected only to distract from the actual point being discussed, or to degrade into an argument about the meaning of words. I could, for instance, argue that it says "when" and not "if", which wouldn't get us anywhere and would therefore be a depressing and fruitless endeavor. It isn't as easy as that; the matter needs to be looked at broadly, considering all relevant aspects and not just two words.
For reference, see the top comment, which clearly mentions the "when applicable" in context and then outlines that, in general, OpenAI doesn't seem to do what they have promised.
And here's a sub-thread that goes into detail on the two words: