It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
Whatever has been written can be unwritten, and if that fails, just start a new company with the same employees.
The only organizations for which that is a persistent requirement are typically things like priesthoods.
People are not interchangeable.
Most employees have bills to pay and will follow the money. The ones that matter most would have different motivations.
Of course, if your sole goal is to create a husk that milks the achievements of the original team for as long as they last and nothing else — sure, you can do that.
But the "organizational desires" are still desires of people in the organization. And if those people are the ducks that lay the golden eggs, it might not be the smartest move to ignore them to prioritize the desires of the market for those eggs.
The market is all too happy to kill the ducks if it means more, cheaper eggs today.
Which is, as the adage goes, why we can't have the good things.
It always rubs me the wrong way when people justify going for more money as "having bills to pay". No, they don't; that framing makes it seem as if they're down on their luck and have to hustle to cover the bills, which is far from reality. I am not shaming people for wanting more money, of course, but past a certain threshold, framing it as an external necessity is dishonest.