Edit: Well, I guess these tweets explain the beef well -
https://twitter.com/elonmusk/status/1606642155346612229
https://twitter.com/elonmusk/status/1626516035863212034
“This is ridiculous,” he said, according to multiple sources with direct knowledge of the meeting. “I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”
- https://www.theverge.com/2023/2/9/23593099/elon-musk-twitter...
By Monday afternoon, “the problem” had been “fixed.” Twitter deployed code to automatically “greenlight” all of Musk’s tweets, meaning his posts will bypass Twitter’s filters designed to show people the best content possible. The algorithm now artificially boosted Musk’s tweets by a factor of 1,000 – a constant score that ensured his tweets rank higher than anyone else’s in the feed.
- https://www.theverge.com/2023/2/14/23600358/elon-musk-tweets...
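For the curious: a boost like that doesn't need to be clever. As a minimal sketch (all names here are hypothetical, not Twitter's actual code), multiplying one author's relevance score by a constant is enough to pin their posts above everyone else's:

    # Hypothetical illustration of a constant-factor author boost.
    # BOOST_FACTOR, rank_feed, and the tweet dict layout are made up.
    BOOST_FACTOR = 1_000

    def rank_feed(tweets, boosted_author_id):
        def score(tweet):
            s = tweet["score"]  # whatever the normal relevance model produced
            if tweet["author_id"] == boosted_author_id:
                s *= BOOST_FACTOR  # dwarfs every unboosted score
            return s
        return sorted(tweets, key=score, reverse=True)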
It is similar to what Microsoft did with Facebook in the early days, slowly acquiring a stake in the company. But with OpenAI this is a more aggressive version of that. What you have now is the exact opposite of their original goals: [0]
Before:
> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. [0]
After:
> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. [1]
The real 'Open AI' is Stability AI, since they are more willing to release their work and AI models, which is what OpenAI was supposed to do.
Microsoft doesn't just provide hardware; it invested a literal 10 billion dollars into OAI (https://www.bloomberg.com/news/articles/2023-01-23/microsoft...). It's fair to say OpenAI is an extension of Microsoft now, and we should be proportionately wary of what they do, knowing what MS usually does.
https://en.wikipedia.org/wiki/Stable_Diffusion
It seems to come with a laundry list of vague restrictions
https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
>“We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
The 'Open' was a joke even then, but at least they publish a DWG design spec for free, spec'ed with about 50% of their internal parser.
A few discussions here https://www.teenvogue.com/story/non-profit-industrial-comple...
"Under the nonprofit-corporate complex, the absorption of radical movements is ensured through the establishment of patronage relationships between the state and/or private capital and social movements. Ideological repression and institutional subordination is based on “a bureaucratized management of fear that mitigates against the radical break with owning-class capital (read: foundation support) and hegemonic common sense (read: law and order).”
https://monthlyreview.org/2015/04/01/the-nonprofit-corporate...
Yes I know you know rurban! I'm linking for the rest of HN :-)
BTW I want to merge your solvespace integration this year, and hopefully dump libdxfrw entirely ;-)
https://en.wikipedia.org/wiki/OpenAI#:~:text=The%20organizat....
I would add this one:
https://twitter.com/elonmusk/status/1630640058507116553
I had no idea about this drama either, so I didn't understand what Elon was talking about; now it seems clear.
But "Based"? Is it the name of his new AI company? Where does that come from?
https://www.irs.gov/charities-non-profits/charitable-organiz...
OpenAI is still a nonprofit. Their financials are public. A lot of folks use "profit" in a hand-wavey sense to describe something they don't like, like an organization sitting on cash or paying key employees more than they expect. The organization may not be doing what donors thought it would with their money, but that doesn't necessarily mean cash retained is profit.
Recent filings show the organization has substantially cut its compensation for key employees year after year. It's sitting on quite a bit of cash, but I think that is expected given the scope of their work.
That said, their financials from 2019 look a little weird. They reported considerable negative expenses, including negative salaries (what did they do, a bunch of clawbacks?), and had no fundraising expenses.
https://projects.propublica.org/nonprofits/organizations/810...
[1] https://smallbusiness.chron.com/difference-between-nonprofit...
https://www.vice.com/en/article/dy7nby/researchers-think-ai-...
[1] The freedom to run the program as you wish, for any purpose (freedom 0). https://www.gnu.org/philosophy/free-sw.en.html
Second: I'm just as concerned about automated generation of propaganda as they seem to be. Given what LLMs are currently capable of doing, a free cyber-Goebbels for every hate group is the default: the AI itself only cares about predicting the next token, not the impact of having done so.
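To make "only cares about predicting the next token" concrete, here's a toy sketch (toy_model and its numbers are placeholders, not any real LLM API): generation is just a loop of next-token picks, and nothing in that loop represents the consequences of the emitted text:

    # Toy sketch: generation = repeated next-token prediction.
    # No term anywhere models the impact of the output.
    import random

    def toy_model(tokens):
        # stand-in for a real LLM: one probability per vocabulary item
        random.seed(sum(tokens))
        return [random.random() for _ in range(100)]

    def generate(tokens, n_new):
        for _ in range(n_new):
            probs = toy_model(tokens)
            tokens.append(max(range(len(probs)), key=probs.__getitem__))  # greedy argmax
        return tokens

    print(generate([1, 2, 3], 5))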
Edit:
Also, the headline of the Vice story you linked to is misleading, given the source document linked in its body.
1. Of the 6 researchers listed as authors of that report, only 2 are from OpenAI
2. Reduced exports of chips from the USA are discussed only briefly within that report, as part of a broader comparison with all the other possible ways to mitigate the various risks
3. Limiting chip exports does nothing to prevent domestic propaganda and research
https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...
But OpenAI isn't a nonprofit. It all depends on what you mean by OpenAI - and what you call OpenAI is not what they call OpenAI.
https://openai.com/blog/openai-lp
> Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit.”
These things are invariably popular, repetitive, and nasty. You may not owe $BILLIONAIRE better but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
(We detached this subthread from https://news.ycombinator.com/item?id=34980579)
Essentially, we just end up encoding all of our flaws back into the machine one way or another.
This is the argument I laid out in the Bias Paradox.
Nowadays, companies and politicians, if one could make such a distinction just for the sake of the argument, will always tout the "job creation" aspect of a certain capitalistic endeavour. Give it a few months/years and we will hear the phrase "job elimination" more and more, from cashiers becoming "consultants" to the elimination of 90+% of interface jobs and beyond: does there really need to be a human hand to push the button for espresso? Does there really need to be a bipedal human to move a package from A to B in a warehouse?
[1] https://arstechnica.com/information-technology/2023/02/robot...
Of course there is. See (Cameron, 1984) (https://en.wikipedia.org/wiki/The_Terminator)
https://openai.com/blog/planning-for-agi-and-beyond/
Based on Sam's statement, they seem to be making a bet that accelerating progress on AI now will help solve the control problem faster in the future. But this strikes me as an extremely dangerous bet to make: if they are wrong, they are substantially reducing the time the rest of the world has to solve the problem, potentially closing that window enough that it won't be solved in time at all, and then foom.
https://en.wikipedia.org/wiki/Kyle_Chapman_(American_activis...
Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants to.
I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...
> Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies. [1]
In a bit of historical irony, the mathematics underpinning the development of early stealth aircraft was based on published research by a Soviet scientist: https://en.wikipedia.org/wiki/Pyotr_Ufimtsev
Also https://www.newyorker.com/humor/daily-shouts/l-p-d-libertari... if you've never read it, may have to open in incognito to avoid a paywall.
https://dallasinnovates.com/exclusive-qa-john-carmacks-diffe...
It makes sense. OpenAI would be dead if they remained a non-profit. They couldn't possibly hope to raise enough to achieve the vision.
Microsoft wouldn't have been willing to bankroll all of their compute without them converting to a for-profit, too.
Personally, I'd rather have a ClosedOpenAI (lol) than NoOpenAI.
And their actions, like making the ChatGPT API insanely cheap, at least show their willingness to make it as accessible as possible.