It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
that’s the real lesson here. we can want to redo OpenAI all we want, but people won’t put their money behind it until they can make a return
it turned out that AI research required billions of dollars to run the LLMs, something that was not originally anticipated; and the only way to get that kind of money is to sell your future (and your soul) to investors who want to see a substantial return