It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have unusual governance structures because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
Whatever has been written can be unwritten, and if that fails, just start a new company with the same employees.
The only organizations for which that is a persistent requirement are typically things like priesthoods.
People are not interchangeable.
Most employees have bills to pay and will follow the money. The ones that matter most would have a different motivation.
Of course, if your sole goal is to create a husk that milks the achievements of the original team for as long as they last and nothing else, then sure, you can do that.
But the "organizational desires" are still desires of people in the organization. And if those people are the ducks that lay the golden eggs, it might not be the smartest move to ignore them to prioritize the desires of the market for those eggs.
The market is all too happy to kill the ducks if it means more, cheaper eggs today.
Which is, as the adage goes, why we can’t have nice things.
If you hire people who want to cash out, then you’ll get people who prioritize prospects for cashing out.
Said another way: they did not make the theoretical public mission core to the everyday being of the organization, the way it is for Médecins Sans Frontières, etc.