It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.
And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.
We should be putting more emphasis and attention on truly open AI models (open training data, training source code and hyperparameters, model source code, and weights) so the benefits of AI accrue to the public and not just to a few companies.
[edit - eliminated specific company mentions]
The "Defense of Marriage Act" comes to mind. There was one so bad that a judge ordered the authors to change it, but I can't find it at the moment.
The Defense of Marriage Act is actually an exception. The people supporting it honestly thought it was defending marriage, and the supportive public knew exactly what it did.
It passed with a veto-proof majority a few weeks before a presidential election, received tons of press, and nobody was confused about what it did.
Whereas the Inflation Reduction Act had absolutely nothing to do with reducing inflation.
Seems arbitrary. There is nothing about that act that even borders on defending marriage, and the people supporting it knew that. It's a comic misnomer.