Conveniently, this also helps them build a monopoly. It is pretty aggravating that they're bastardizing and abusing terms like 'safety' and 'democratization' while doing this. I hope they fail in their attempts, or that the competition rolls over them sooner rather than later.
I personally think the greatest threat posed by these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide rich and poor, and thus threaten the order of our society.
That should be the goal.
Me too; in comparison, all the other potential threats discussed here feel mostly secondary to me. I also suspect that once these AIs reach something closer to AGI, the big players who have them will simply not provide any kind of access at all and will instead use them to churn out an endless stream of money-making applications.
The issue here is that a 'Linux' of AI would be happy to use the N-word and the like. That makes it politically untenable.
I do think you're probably right about AI, though. Too many influential groups are going to get too mad about the words an open model will output. Allowing only locked-down models is going to severely limit their usefulness for all sorts of novel creative and productive use cases.