It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk venture) all have funky governance structures because they want to be seen as public goods. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.
[edit - eliminated specific company mentions]
It's honestly kind of frustrating to me how the tech space continues to just excuse this. Every major new technology since I've been paying attention (2004-ish?) has gone this exact same way. Someone builds some cool new thing, then dillholes with money invest in it, it becomes a product, it becomes enshittified, and people bemoan that process while looking for new shiny things. Like, I'm all for new shiny things, but what if we just stopped letting the rest become enshittified?
As much as people have told me all my life that the profit motive makes companies compete to deliver the best products, I don't know that I've ever actually seen that pan out in my fucking life. What it does is flatten all products offered in a given market to whatever set of often highly arbitrary and random features all the competitors seem to think is most important. For an example, look at short form video, which started with Vine, was perfected by TikTok, and is now being hamfisted into Instagram, Facebook, Twitter, and YouTube despite not really making sense in those contexts. But the "market" decided that short form video is important, so everything must now have it even when it makes no sense in the larger product.