If the frontier models generate huge revenue from big government, intelligence, and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI, rather than it tapering off with diminishing returns for bigger training/inference capital outlay. Obviously, OpenAI is leveraged against that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. Well, they're all cross-linked in the entire AI enterprise, really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.
We'll see. Europe has historically been behind the curve on tech developments like this, and China, although this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship when it comes to what these models can do. They want them to be focused tools that help society, but the American companies want much more: they want power in their own hands and power in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.
That’s just a different flavour of enforced right-think.
With the hard shift to the right and Trump coming into office, the last bit especially will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" of topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind are the recent "The Nazis were actually left wing" and "There are only two genders".)
News flash: household-name businesses aren't going to repeat slurs if the media will use them to defame them. Never mind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR alone is enough to make them censor their models.
There's no agenda behind removing porn from ChatGPT any more than there's an agenda behind removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow-government conspiracies against you seeing sex and being bigoted.
Though I see no reason whatsoever why an LLM should be blocked from answering a "how do I make a nuclear bomb?" query.
"if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.
Censorship is censorship is censorship.
If you don't believe the US has elections, then straighten up your tinfoil hat :)
Maybe next you'll say the earth is flat, if you think people have nothing better to do than find ways to lie to you.
Again, consider my example about YouTube - it's not illegal for Google to put pornography on YouTube. They still moderate it out though, not because they want to "censor" their users but because amateur porn is a liability nightmare to moderate. Similarly, I don't think ChatGPT's limitations qualify as censorship.
"Censorship is censorship is censorship" is the sort of defense you'd rely on if you were caught selling guns and kiddie porn on the internet. It's not the sort of defense OpenAI needs to use though, because they have a semblance of self-preservation instinct and would rather not let ChatGPT say something capable of pissing off the IMF or ADL. Call that "censorship" all you want - it's like canvassing for your right to yell 'fire!' in a movie theater.
Friend, neither of those is a body that can declare the US constitution null and void. Nor do they get to pick and choose which speech is kosher. It is not up to those orgs to decide.
<< They're accepting your definition of censorship to highlight how fucking stupid it is.
They are accepting it, because there is no way it cannot be accepted. Now, the fact that there is some cognitive dissonance over what should logically follow is a separate issue entirely.
Best I can do is spread some seeds..
Of course, the market being extremely concentrated and effectively an oligopoly even in the best case does shine a somewhat different light on it - until/unless open models catch up in both quality and accessibility.
i.e. denying someone who is running an online platform/community or training an LLM the right to remove or not provide specific content is clearly limiting their right to freedom of expression.
I was explaining why it is more harmful; I thought you were arguing that it is not harmful?
Censorship in the West and in China is, in both cases, done by unelected people.