If I donated millions to them, I’d be furious.
What gives me even less sympathy for Altman is that he took OpenAI, whose mission was open AI, and not only turned it closed but then immediately started a world tour weaponizing fear-mongering to convince governments to effectively outlaw genuinely open AI.
Why is Worldcoin a grift?
And I believe his argument for it not being open is safety.
You cannot abandon your non-profit's entire mission on a highly hypothetical, controversial pretext. Moreover, they've released virtually no details on GPT-4, not even harmless ones, yet let anyone use GPT-4 (such safety!), and haven't even released GPT-3, a model with far fewer capabilities than many open-source alternatives. (None of which have ended the world! What a surprise!)
They plainly wish to make a private cash cow atop non-profit donations to an open cause. They hit upon wild success, and want to keep it for themselves; this is precisely the opposite of their mission. It's morally, and hopefully legally, unacceptable.
"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." - https://openai.com/blog/introducing-openai
I'm not actually sure which of these points you're objecting to, given that you dispute the dangers as well as being angry about the money-making, but even in that blog post they cared about risks: "It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly."
GPT-4 had a ~100-page report, which included generations that the red teaming found and deemed unsafe, and which they took steps to prevent in the public release. The argument for having any public access is the same one Open Source advocates use for source code: more eyeballs.
I don't know if it's a correct argument, but it's at least not obviously stupid.
> (None of which have ended the world! What a surprise!)
If it had literally ended the world, we wouldn't be here to talk about it.
If you don't know how much plutonium makes a critical mass, only a fool would bang lumps of the stuff together to keep warm and respond to all the naysayers with the argument "you were foolish to even tell me there was a danger!", even while it's clear that everyone wants bigger rocks…
And yet at the same time, the free LLMs (along with the image generators) have made a huge dent in the kinds of content one can find online, further eroding the trustworthiness of the internet, which was already struggling.
> They hit upon wild success, and want to keep it for themselves; this is precisely the opposite of their mission. It's morally, and hopefully legally, unacceptable.
By telling governments "regulate us, don't regulate our competitors, don't regulate open source"? No. You're just buying into a particular narrative, like most of us do most of the time. (So am I, of course. Even though I have no idea what to think of the guy himself, and am aware of having misjudged other tech leaders in both directions, that too is a narrative.)
How was it unsafe? How were those generations causing harm? (Curious, just in case somebody has read the report.)