Effective altruism, eh?
It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.
If you open-weight the model, if dangerous capabilities are later discovered there is no way to put the genie back in the bottle; the weights are out there, anyone can use them.
This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).
This makes absolutely no sense.
The risk is that he’s too confident and screws it up. Or continues on the growth path and becomes the person everyone seems to accuse him of being. But I think he’s not interested in petty shit, scratching around for a few bucks. Why, when you can (try to) save the world?
If you need money to run the publicly released thing you underpriced to seize market share...
... you could also just, not?
And stick to research and releasing results.
At what point does it stop being "necessary" for OpenAI to do bad things to stay competitive and start being about them just running the standard VC playbook underneath a non-profit umbrella?
After ChatGPT's model was not released to the public, every for-profit raced to reproduce and improve on it. The decision not to release early and often under a restrictive license helped create that competition for funds and talent. If the company had been truly open, competitors would have had a choice: move quickly, spend less money, and contribute to the common core; or spend more money, go slower clean-room reimplementing the open code they couldn't use, and try to compete alone. That might have been a huge win for the open-source model, making contributing to the commons the profitable decision.
This seems to make a decent argument that these models are potentially not safe. I'd prefer that criminals not have access to a PhD-level bomb-making assistant that can explain the process to them like they're 12. The cat may be out of the bag, but you don't just hand out guns to everyone (for free) because a few people have already misused them.
The Russians will obviously use it to spread the Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.
https://www.cyberark.com/resources/blog/apt29s-attack-on-mic...
It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them are stealing those materials. Google has enough leakers to conservative media, of all places, that you should suspect at least one Googler is exfiltrating data to China, Russia, or India.
Huh? There's no secret to building these LLM-based "AI"s - they all use the same "transformer" architecture that was published by Google. You can find step-by-step YouTube tutorials on how to build one yourself if you want to.
All that OpenAI did was build a series of progressively larger transformers, trained on progressively larger training sets, and document how the capabilities expanded as you scaled them up. Anyone paying attention could have done the same at any stage if they wanted to.
The expense of recreating what OpenAI have built isn't in having to reproduce some secret architecture. The expense is in obtaining the training data and training the model.
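To make the "no secret architecture" point concrete, here's roughly what a decoder-style transformer block looks like in PyTorch. This is a minimal sketch with illustrative hyperparameters (d_model, n_heads, etc.), not OpenAI's actual configuration:

    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        # One pre-norm decoder block: causal self-attention + MLP,
        # each wrapped in a residual connection.
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):  # x: (batch, seq_len, d_model)
            seq_len = x.size(1)
            # Boolean causal mask: True = "may not attend" (future positions).
            mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                         device=x.device), diagonal=1)
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, attn_mask=mask)
            x = x + a
            x = x + self.mlp(self.ln2(x))
            return x

    # A GPT-style model is essentially token + position embeddings, a stack
    # of these blocks, and a final linear projection back to the vocabulary.
    # The hard (expensive) part is the data and compute to train it at scale.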
Indeed, it’s not widespread even now; lots of folks round here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.
(In general, I think that if you are more worried about a baby machine god owned and aligned by Meta than about complete annihilation from unaligned ASI, then you’ll prefer open weights no matter the theoretical risk.)
(Not saying OpenAI isn't greedy)