It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should put more emphasis and attention on truly open AI models (open training data, training source code and hyperparameters, model source code, and weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
What if we just made it illegal for corporate entities (including nonprofits) to lie? If a company promises to undertake some action that's within its capacity (as opposed to stating goals for a future which may or may not be achievable due to external conditions), then it has to do so within a specified timeframe, and if it doesn't happen it can be sued or prosecuted.
> But then they will just avoid making promises
And the markets they operate in, whether commercial or not, will judge them accordingly.