zlacker

OpenAI is now everything it promised not to be: closed-source and for-profit

submitted by isaacf+(OP) on 2023-03-01 08:21:22 | 1623 points 543 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
2. neom+E9[view] [source] 2023-03-01 09:57:52
>>isaacf+(OP)
I find it a little odd that Elon seems to take a swipe at OpenAI any opportunity he gets. If he cares so much about them not making money, maybe he should have put his twitter cash there instead? It's reassuring to me that the two people running policy work at the big AI "startups", Jack Clark (Anthropic) and Miles Brundage (OpenAI, who was hired by Jack iirc), are genuinely good humans. I've known Jack for 10 years and he's for sure a measured and reasonable person who cares about not doing harm. Although I don't know Miles, my understanding is he has similar qualities. If they're gonna be for profit, I feel this is really important.

Edit: Well, I guess these tweets explain the beef well -

https://twitter.com/elonmusk/status/1606642155346612229

https://twitter.com/elonmusk/status/1626516035863212034

https://twitter.com/elonmusk/status/1599291104687374338

https://twitter.com/elonmusk/status/1096987465326374912

◧◩
12. pavlov+Ed[view] [source] [discussion] 2023-03-01 10:39:06
>>3D2902+ac
He's lost himself in the fake popularity of being a social media celebrity. He started believing that having 100 million followers on a web site really means that a continent's worth of people adore you. For all his complaints about bots after he got cold feet on the Twitter purchase, he seems strangely naïve about how social media really works and what's real there.

“This is ridiculous,” he said, according to multiple sources with direct knowledge of the meeting. “I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”

- https://www.theverge.com/2023/2/9/23593099/elon-musk-twitter...

By Monday afternoon, “the problem” had been “fixed.” Twitter deployed code to automatically “greenlight” all of Musk’s tweets, meaning his posts will bypass Twitter’s filters designed to show people the best content possible. The algorithm now artificially boosted Musk’s tweets by a factor of 1,000 – a constant score that ensured his tweets rank higher than anyone else’s in the feed.

- https://www.theverge.com/2023/2/14/23600358/elon-musk-tweets...
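As a toy illustration of the mechanism described in that Verge excerpt, here is a minimal sketch (all names, scores, and the allow-list are hypothetical) of how a constant per-author multiplier guarantees one account outranks everyone else in a scored feed:

```python
# Hypothetical sketch of a constant author boost in feed ranking:
# multiplying one author's score by a fixed factor ensures their posts
# sort above everyone else's, regardless of organic engagement.
BOOST_FACTOR = 1_000
BOOSTED_AUTHORS = {"elonmusk"}  # hypothetical allow-list

def rank_score(author: str, base_score: float) -> float:
    """Return the feed-ranking score, applying the constant boost."""
    if author in BOOSTED_AUTHORS:
        return base_score * BOOST_FACTOR
    return base_score

# Hypothetical (author, organic engagement score) pairs.
posts = [("alice", 9.5), ("elonmusk", 0.2), ("bob", 7.1)]
ranked = sorted(posts, key=lambda p: rank_score(*p), reverse=True)
# The boosted account's low-engagement post (0.2 * 1000 = 200)
# now outranks every organically popular post.
```

A flat multiplier like this isn't personalization at all; it is a hard-coded override of whatever the rest of the ranking model computes, which is why the article calls it "artificial."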

◧◩◪
25. rvz+uf[view] [source] [discussion] 2023-03-01 10:54:05
>>lfkdev+Db
He is definitely right there. At this point, I consider OpenAI partially acquired by Microsoft, since it is almost majority controlled by them. It is essentially a Microsoft AI division.

It is similar to what Microsoft did with Facebook in the early days, slowly acquiring a stake in the company. But this is a more aggressive version of that with OpenAI. What you have now is the exact opposite of their original goals [0]:

Before:

> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. [0]

After:

> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. [1]

The real 'Open AI' is Stability AI, since they are far more willing to release their work and AI models, doing what OpenAI was supposed to do.

[0] https://openai.com/blog/introducing-openai/

[1] https://openai.com/blog/planning-for-agi-and-beyond

◧◩
40. throwa+gh[view] [source] [discussion] 2023-03-01 11:11:20
>>supriy+ta
> Microsoft’s investment into the company through providing Azure services

Microsoft doesn't just provide hardware, it invested a literal 10 billion dollars into OpenAI (https://www.bloomberg.com/news/articles/2023-01-23/microsoft...). It's fair to say OpenAI is Microsoft's extension now, and we should be proportionately wary of what they do, knowing what MS usually does.

◧◩
58. permo-+tk[view] [source] [discussion] 2023-03-01 11:41:11
>>phtriv+1k
https://huggingface.co/bigscience/bloom

open-source and funded by the French government

◧◩◪◨
76. geokon+5m[view] [source] [discussion] 2023-03-01 11:56:50
>>zirgs+Xi
Are you sure? According to Wikipedia it's "source available"

https://en.wikipedia.org/wiki/Stable_Diffusion

It seems to come with a laundry list of vague restrictions

79. Klonoa+dm[view] [source] 2023-03-01 11:57:41
>>isaacf+(OP)
Huh, I was reading this old New Yorker piece earlier today - how timely I guess.

https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

>“We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”

◧◩
85. rurban+in[view] [source] [discussion] 2023-03-01 12:06:50
>>panick+dj
The OpenDWG Alliance (1998) e.g. https://de.wikipedia.org/wiki/Open_Design_Alliance (since 2002)

The 'Open' was a joke even then, but at least they publish a DWG design spec for free, covering about 50% of their internal parser.

◧◩
89. charle+Un[view] [source] [discussion] 2023-03-01 12:11:11
>>say_it+J9
In the context of the u.s. system of capitalism, "non-profit" does not equate to "non-capitalist". The institutions still serve capital (if we look to Marx's conception of capital as a process) -- at the very least, in avoidance of taxes, R&D to ultimately increase productivity, pacification of potential insurgent action (e.g. Andrew Carnegie's intent). We should distinguish between the role of local, community run organizations and "non-profits" that receive millions in funding from venture _capitalists_.

A few discussions here https://www.teenvogue.com/story/non-profit-industrial-comple...

"Under the nonprofit-corporate complex, the absorption of radical movements is ensured through the establishment of patronage relationships between the state and/or private capital and social movements. Ideological repression and institutional subordination is based on “a bureaucratized management of fear that mitigates against the radical break with owning-class capital (read: foundation support) and hegemonic common sense (read: law and order).”

https://monthlyreview.org/2015/04/01/the-nonprofit-corporate...

◧◩◪
103. phkahl+Xp[view] [source] [discussion] 2023-03-01 12:25:48
>>rurban+in
Hence the need for LibreDWG https://www.gnu.org/software/libredwg/

Yes I know you know rurban! I'm linking for the rest of HN :-)

BTW I want to merge your solvespace integration this year, and hopefully dump libdxfrw entirely ;-)

◧◩
106. phkahl+9q[view] [source] [discussion] 2023-03-01 12:27:41
>>toyg+0e
How about VLC:

https://www.reddit.com/r/bestof/comments/73dafr/vlc_creator_...

◧◩◪◨
109. geeB+Kq[view] [source] [discussion] 2023-03-01 12:32:43
>>selest+Ql
https://news.ycombinator.com/item?id=33403233
◧◩◪
111. atlasu+Pq[view] [source] [discussion] 2023-03-01 12:33:14
>>andrep+io
Isn't the point that it wasn't founded as a corporation, nor by Sam Altman?

https://en.wikipedia.org/wiki/OpenAI#:~:text=The%20organizat....

◧◩◪◨
146. narag+nv[view] [source] [discussion] 2023-03-01 13:07:13
>>rainco+Ge
He edited the comment and linked relevant tweets.

I would add this one:

https://twitter.com/elonmusk/status/1630640058507116553

I had no idea about this drama either, so I didn't understand what Elon was talking about; now it seems clear.

But "Based"? Is it the name of his new AI company? Where does that come from?

◧◩◪
155. nulbyt+2x[view] [source] [discussion] 2023-03-01 13:16:55
>>olalon+Xn
You can't. In the U.S., the assets of a charity must be permanently dedicated to an exempt purpose.

https://www.irs.gov/charities-non-profits/charitable-organiz...

OpenAI is still a nonprofit. Their financials are public. A lot of folks use "profit" in a hand-wavey sense to describe something they don't like, like an organization sitting on cash or paying key employees more than they expect. The organization may not be doing what donors thought it would with their money, but that doesn't necessarily mean cash retained is profit.

Recent filings show the organization has substantially cut its compensation for key employees year after year. It's sitting on quite a bit of cash, but I think that is expected given the scope of their work.

That said, their financials from 2019 look a little weird. They reported considerable negative expenses, including negative salaries (what did they do, a bunch of clawbacks?), and had no fundraising expenses.

https://projects.propublica.org/nonprofits/organizations/810...

◧◩◪◨
159. IncRnd+lx[view] [source] [discussion] 2023-03-01 13:18:58
>>atlasu+Pq
"Non-profit" refers to the tax-exempt status. "Corporation" is an organizational structure. The designations do not have to overlap but sometimes do. [1] It isn't clear from the wiki page whether it was founded as a corporation or not. The content of the filings is the only way to tell.

[1] https://smallbusiness.chron.com/difference-between-nonprofit...

◧◩◪◨⬒
162. whywhy+cy[view] [source] [discussion] 2023-03-01 13:25:56
>>ben_w+hu
That car analogy doesn't work, because Mercedes-Benz didn't actively work to prevent other cars from existing.

https://www.vice.com/en/article/dy7nby/researchers-think-ai-...

◧◩◪◨⬒
166. Alexan+Yy[view] [source] [discussion] 2023-03-01 13:30:50
>>iib+qw
I kind of see their point. Freedom 0 is about the freedom to run software how you wish[1] and "commercial use" can encompass everything from FAANG down to one-man, niche businesses.

[1] The freedom to run the program as you wish, for any purpose (freedom 0). https://www.gnu.org/philosophy/free-sw.en.html

176. KoftaB+sC[view] [source] 2023-03-01 13:52:11
>>isaacf+(OP)
More info I found about their organizational structure: https://openai.com/blog/openai-lp
◧◩◪◨⬒⬓
179. ben_w+jE[view] [source] [discussion] 2023-03-01 14:03:06
>>whywhy+cy
First: Crumple zones have since become mandatory, which pattern matches what would happen if OpenAI gets what they ask for in that link.

Second: I'm just as concerned about automated generation of propaganda as they seem to be. Given what LLMs are currently capable of doing, a free cyber-Goebbels for every hate group is the default: the AI itself only cares about predicting the next token, not the impact of having done so.

Edit:

Also, the headline of the Vice story you linked to is misleading given the source document that the body linked to.

1. Of the 6 researchers listed as authors of that report, only 2 are from OpenAI

2. Reduced exports of chips from the USA are discussed only briefly within that report, as part of a broader comparison with all the other possible ways to mitigate the various risks

3. Limiting chip exports does nothing to prevent domestic propaganda and research

https://cdn.openai.com/papers/forecasting-misuse.pdf

196. mhb+jJ[view] [source] 2023-03-01 14:31:57
>>isaacf+(OP)
Well, Eliezer thinks it's better for it to be closed source:

https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...

https://news.ycombinator.com/item?id=34969892

◧◩◪◨⬒⬓
202. Tijdre+UJ[view] [source] [discussion] 2023-03-01 14:35:06
>>bradle+yI
https://en.wikipedia.org/wiki/IKEA#Corporate_structure
◧◩◪◨
213. kgwgk+jO[view] [source] [discussion] 2023-03-01 14:59:54
>>nulbyt+2x
> OpenAI is still a nonprofit.

But OpenAI isn’t a nonprofit. It all depends on what you mean by OpenAI - and what you call OpenAI is not what they call OpenAI.

https://openai.com/blog/openai-lp

> Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit.”

◧◩◪◨⬒
215. kgwgk+GP[view] [source] [discussion] 2023-03-01 15:09:11
>>brooks+DB
https://openai.com/blog/openai-lp
◧◩
219. dang+GQ[view] [source] [discussion] 2023-03-01 15:14:33
>>3D2902+ac
"Eschew flamebait. Avoid generic tangents."

These things are invariably popular, repetitive, and nasty. You may not owe $BILLIONAIRE better but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

(We detached this subthread from https://news.ycombinator.com/item?id=34980579)

230. 13year+yT[view] [source] 2023-03-01 15:31:21
>>isaacf+(OP)
There are many who look to AI as an escape from all of humanity's flaws. We can't ignore the feedback loop in the creation of AI.

Essentially, we just end up encoding all of our flaws back into the machine one way or another.

This is the argument I laid out in the Bias Paradox.

https://dakara.substack.com/p/ai-the-bias-paradox

◧◩◪◨⬒
272. croes+H81[view] [source] [discussion] 2023-03-01 16:38:40
>>tapoxi+wF
Or Bertelsmann

https://en.m.wikipedia.org/wiki/Bertelsmann_Stiftung

◧◩
274. ly3xqh+S81[view] [source] [discussion] 2023-03-01 16:39:10
>>colleg+yc
I suppose the next step beyond surveillance capitalism, given the application of language models to robotic actuators [1], but also as a general trend, will be jobless capitalism.

Nowadays, companies and politicians, if one could make such a distinction just for the sake of the argument, will always tout the "job creation" aspect of a certain capitalistic endeavour. Give it a few months/years and we will hear the phrase "job elimination" more and more, from cashiers becoming "consultants" to the elimination of 90+% of interface jobs and beyond: does there really need to be a human hand to push the button for espresso? does there really need to be a bipedal human to move a package from A to B in a warehouse?

[1] https://arstechnica.com/information-technology/2023/02/robot...

◧◩◪◨⬒⬓⬔
286. YeGobl+wd1[view] [source] [discussion] 2023-03-01 16:55:47
>>novaRo+bY
>> Is it just a symbolic metaphor or is there any scientific research behind this words?

Of course there is. See (Cameron, 1984) (https://en.wikipedia.org/wiki/The_Terminator)

◧◩◪◨
289. therea+ve1[view] [source] [discussion] 2023-03-01 16:59:20
>>fragsw+291
For long form, I’d suggest the Cold Takes blog, whose author is a very systematic thinker and has been focusing on AGI risk recently. https://www.cold-takes.com
◧◩◪◨
299. adamsm+Dh1[view] [source] [discussion] 2023-03-01 17:10:39
>>fragsw+291
They are releasing powerful AI tools at an alarming rate, before safety researchers (and I mean real safety researchers) have a chance to understand their implications. They are generating an enormous amount of buzz and hype, which is fueling a coming AI arms race that is extremely dangerous. The Control Problem is very real and becoming more pressing as things accelerate. Sam has recently given lip service to caring about the problem, but OpenAI's actions seem to indicate it's not a major priority. There was a hint that they cared when they thought GPT-2 was too dangerous to release publicly, but at this point, if they were serious about safety, no model past ChatGPT and Bing would be released to the public at all, full stop.

https://openai.com/blog/planning-for-agi-and-beyond/

Based on Sam's statement, they seem to be making a bet that accelerating progress on AI now will help solve the Control Problem faster in the future. But this strikes me as an extremely dangerous bet, because if they are wrong they are substantially reducing the time the rest of the world has to solve the problem, potentially closing that window enough that it won't be solved in time at all, and then foom.

◧◩◪◨⬒
305. projek+sk1[view] [source] [discussion] 2023-03-01 17:20:29
>>narag+nv
With the baseball bat it evokes Kyle Chapman.

https://en.wikipedia.org/wiki/Kyle_Chapman_(American_activis...

◧◩◪◨⬒
309. sharkj+3l1[view] [source] [discussion] 2023-03-01 17:23:09
>>startu+wa1
> As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems.

https://openai.com/blog/planning-for-agi-and-beyond

◧◩◪◨⬒
341. Fillig+ix1[view] [source] [discussion] 2023-03-01 18:03:23
>>adamsm+Dh1
> Based on Sam's statement, they seem to be making a bet that accelerating progress on AI now will help solve the Control Problem faster in the future. But this strikes me as an extremely dangerous bet, because if they are wrong they are substantially reducing the time the rest of the world has to solve the problem, potentially closing that window enough that it won't be solved in time at all, and then foom.

Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants to.

I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...

◧◩◪◨⬒
391. shrimp+0V1[view] [source] [discussion] 2023-03-01 19:50:26
>>Loughl+GI1
Here it is: https://worldcoin.org/
◧◩
429. ozten+082[view] [source] [discussion] 2023-03-01 20:52:32
>>yonran+g62
> OpenAI

> Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies. [1]

[1] https://openai.com/blog/introducing-openai

◧◩◪
431. wetmor+z82[view] [source] [discussion] 2023-03-01 20:54:50
>>est31+ln
> It's published research, not hidden in secrecy, like say how to build planes invisible under radar.

In a bit of historical irony, the mathematics underpinning the development of early stealth aircraft was based on published research by a Soviet scientist: https://en.wikipedia.org/wiki/Pyotr_Ufimtsev

◧◩◪◨
434. pixl97+T82[view] [source] [discussion] 2023-03-01 20:56:31
>>eatsyo+6o1
As a libertarian I would come give you a hug on your views but I'd have to drive on public roads to get there so we'll just wave from afar.

Also https://www.newyorker.com/humor/daily-shouts/l-p-d-libertari... if you've never read it, may have to open in incognito to avoid a paywall.

◧◩◪
448. MrScru+le2[view] [source] [discussion] 2023-03-01 21:24:55
>>Teever+2t1
Well John Carmack is trying to make inroads towards AGI without going the huge compute route, so I don't think it's inherently obvious that it's the only game in town.

https://dallasinnovates.com/exclusive-qa-john-carmacks-diffe...

◧◩◪◨⬒⬓⬔
457. LordDr+Lj2[view] [source] [discussion] 2023-03-01 21:55:22
>>Julesm+mn1
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
◧◩◪◨⬒⬓⬔
458. LordDr+Uj2[view] [source] [discussion] 2023-03-01 21:56:18
>>novaRo+bY
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
◧◩
472. iKlsR+Kr2[view] [source] [discussion] 2023-03-01 22:44:53
>>mellos+pe
Random aside: they recently bought https://ai.com, which redirects to ChatGPT atm, but it wouldn't surprise me if they plan to drop the "open" eventually or move their paid stuff to it, as they do have open-source stuff that merits the name.
◧◩◪◨⬒⬓⬔⧯
476. matt_h+Bx2[view] [source] [discussion] 2023-03-01 23:20:18
>>DiggyJ+Ho2
https://news.ycombinator.com/item?id=28369570

https://news.ycombinator.com/item?id=27013865

◧◩◪◨⬒⬓⬔⧯▣▦
495. hot_gr+H03[view] [source] [discussion] 2023-03-02 03:24:49
>>honkle+IZ2
Lil B said it's both: https://www.urbandictionary.com/define.php?term=Based%20God But only because he changed the meaning. Didn't realize there was an older one too.
539. mvkel+po8[view] [source] 2023-03-03 17:06:47
>>isaacf+(OP)
Isn't all of this explained in this blog post (https://openai.com/blog/planning-for-agi-and-beyond)?

It makes sense. OpenAI would be dead if they remained a non-profit. They couldn't possibly hope to raise enough to achieve the vision.

Microsoft wouldn't have been willing to bankroll all of their compute without them converting to a for-profit, too.

Personally, I'd rather have a ClosedOpenAI (lol) than NoOpenAI.

And their actions, like making the ChatGPT API insanely cheap, at least show their willingness to make it as accessible as possible.

[go to top]