It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have a funky governance structure because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should put more emphasis on truly open AI models (open training data, training source code and hyperparameters, model source code, and weights) so the benefits of AI accrue to the public and not just to a few companies.
Blaming "human nature" is an excuse that is popular among egomaniacs, but on even brief inspection it is transparently thin: Human nature includes plenty of non-profits and people who did great things for humanity for little or no gain (scientists, soldiers, public servants, even some sofware developers). It also includes people who have done horrible things.
The reality of human nature is that we have a choice. And that idea is very old and fundamental:
And the serpent said unto the woman, Ye shall not surely die: For God doth know that in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil.

And when the woman saw that the tree was good for food, and that it was pleasant to the eyes, and a tree to be desired to make one wise, she took of the fruit thereof, and did eat, and gave also unto her husband with her; and he did eat.

And the eyes of them both were opened, and they knew that they were naked; and they sewed fig leaves together, and made themselves aprons.
That's the Tree of the Knowledge of Good and Evil, of course (Genesis 3). We know good and evil, and we make our own choices; there is no blaming God or some outside force. If you do evil, it was your choice.