It is safer to operate an AI as a centralized service, because if you discover dangerous capabilities you can turn the model off or mitigate them.
If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle: the weights are out there, and anyone can use them.
This of course applies to both mundane harms (e.g. generating deepfake porn of famous people) and existential risks (e.g. power-seeking behavior).
This makes absolutely no sense.
This seems to make a decent argument that these models are potentially not safe. I'd prefer that criminals not have access to a PhD-level bomb-making assistant that can explain the process to them like they're 12. While the cat may be out of the bag, you don't just hand out guns to everyone (for free) knowing a few people will misuse them.
The Russians will obviously use it to spread the Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.
https://www.cyberark.com/resources/blog/apt29s-attack-on-mic...
It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them will steal those materials. Google has enough leakers to conservative media, of all places, that you should suspect at least one Googler is exfiltrating data to China, Russia, or India.
Indeed, it’s not widespread even now; lots of folks around here are still confused by “open weight sounds like open source, and we like open source”, and Elon is still charging towards fully open models.
(In general, I think that if you are more worried about a baby machine god owned and aligned by Meta than about complete annihilation from unaligned ASI, then you’ll prefer open weights no matter the theoretical risk.)
(Not saying OpenAI isn't greedy)