zlacker

OpenAI LP

submitted by gdb+(OP) on 2019-03-11 15:59:44 | 269 points | 195 comments

NOTE: showing posts with links only
8. option+W4 | 2019-03-11 16:28:35
>>gdb+(OP)
And Khosla Ventures is one of their key investors.

Let's not forget that Khosla himself does not exactly care about public interest or existing laws https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...

9. Bucket+P5 | 2019-03-11 16:34:11
>>ckugbl+F4
This Twitter account may be of interest to you; it aggregates information on this topic: https://twitter.com/black_in_ai. I don't know why race is relevant to this article, though. Must we make everything a race issue?

14. mises+v7 | 2019-03-11 16:44:26
>>option+W4
Non-AMP link: https://www.nytimes.com/2018/10/01/technology/california-bea...

I just read the article, and am not sure I see the issue. Quote from his lawyer: “No owner of private business should be forced to obtain a permit from the government before deciding who it wants to invite onto its property"

Where's the issue here? The guy basically bought the property all around the beach and decided to close down access. I wouldn't say it's a nice thing to do, but it's legal. If I buy a piece of property, my rights as the owner should trump the rights of a bunch of surfers who want to get to a beach. The state probably should have been smart enough not to sell all the land.

Failing that, just seize a small portion via eminent domain: a 15-foot-wide strip on the edge of the property would likely come at a reasonable cost, and ought to provide an amicable resolution for all.

20. ktta+R8 | 2019-03-11 16:51:33
>>gdb+(OP)
Reactions on Reddit seem different from here - https://redd.it/azvbmn

26. mikeyo+3a | 2019-03-11 16:58:37
>>m_ke+j8
Nope, it was mentioned last Friday in a blog post (that he was stepping down from YC):

https://blog.ycombinator.com/updates-from-yc/

And TechCrunch had a source last Friday as well that said Altman intended to become CEO of OpenAI:

https://techcrunch.com/2019/03/09/did-sam-altman-make-yc-bet...

28. jackpi+qa | 2019-03-11 17:00:58
>>stevie+j9
That seems to be the general consensus of /r/MachineLearning as well: https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_o...

53. gdb+ee | 2019-03-11 17:25:30
>>dannyk+Ea
Yes, OpenAI Nonprofit is a 501(c)(3) organization. Its mission is to ensure that artificial general intelligence benefits all of humanity. See our Charter for details: https://openai.com/charter/.

The Nonprofit would fail at this mission without raising billions of dollars, which is why we have designed this structure. If we succeed, we believe we'll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.

55. gdb+ye | 2019-03-11 17:27:13
>>nharad+8e
The nonprofit board retains full control, and can only take actions that will further our mission.

As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.

65. mises+eg | 2019-03-11 17:37:10
>>dEnigm+Vd
The Supreme Court didn't grant cert; that's different. It means they don't want to set precedent or believe sufficient precedent already exists. This was last adjudicated in 1999 in Saenz v. Roe, where California tried to limit new residents' welfare benefits, for their first year, to what they would have received in their previous state. The court ruled this violated the constitutional protection of interstate travel, and upheld the view that the 14th Amendment applies constitutional rights against all the states. Source: https://www.law.cornell.edu/wex/fourteenth_amendment_0

This then undoubtedly applies the 5th Amendment's takings clause to the states: “…nor shall private property be taken for public use, without just compensation.” Forcing the owner to provide access without compensation clearly violates that clause, and the state cannot violate this right (see above).

The fact that the Supreme Court didn't grant cert probably means they believe there is already precedent here, or just as probably that they didn't have the time. They always have a full docket; they were probably just out of slots.

I urge others to rebut this on legal grounds, not just say they disagree. People keep killing my comments, but it seems like they all just dislike the "selfish" appearance of the actions.

80. throwa+Qi | 2019-03-11 17:53:42
>>gdb+vd
Leaving aside the absolutely monumental "if" in that sentence, how does this square with the original OpenAI charter [1]:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?

Likewise, also from the OpenAI charter:

> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.

______________________________

1. https://openai.com/charter/

82. gdb+0k | 2019-03-11 18:01:02
>>throwa+Qi
The Charter was designed to capture our values, as we thought about how we'd create a structure that allows us to raise more money while staying true to our mission.

Some companies to compare with:

- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3...) and Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date

- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and now is filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over 45x return to date
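
Back-of-the-envelope, using the post-money figures from those links:

    # implied return multiples (all valuations in $M, post-money)
    print(22_500 / 100)   # Stripe: Series E / Series A   -> 225x ("over 200x")
    print(10_000 / 220)   # Slack:  IPO filing / Series C -> ~45x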

> you don't want to "unduly concentrate power"? How will this work?

Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even making direct distributions to the world.
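
To make the mechanics concrete, here's a toy sketch of a capped-return structure (the cap multiple and figures below are placeholders for illustration, not our actual terms):

    # Toy model of a capped-return LP; all numbers are placeholders.
    def distribute(total_value, invested, cap_multiple):
        # investors are repaid up to invested * cap; the excess goes to the Nonprofit
        investor_return = min(total_value, invested * cap_multiple)
        nonprofit_share = total_value - investor_return
        return investor_return, nonprofit_share

    # e.g. $1B invested at a hypothetical 100x cap, $1T of value created ($M units)
    print(distribute(1_000_000, 1_000, 100))  # -> (100000, 900000)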

88. gdb+Kl | 2019-03-11 18:12:17
>>stevie+j9
> If the partner gets a vote/profit then a "charter" or "mission" won't change anything

(I work at OpenAI.)

The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst
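
A loose way to picture those two rules (an illustrative model only, not anything from the actual governing documents):

    # Illustrative model of rules (a) and (b) above; not real governance code.
    def board_is_valid(board):
        # (a) only a minority of board members may hold a stake in OpenAI LP
        return 2 * sum(m["has_lp_stake"] for m in board) < len(board)

    def can_vote(member, conflicts_with_mission):
        # (b) stakeholders sit out decisions that may conflict with the mission
        return not (member["has_lp_stake"] and conflicts_with_mission)

    board = [{"has_lp_stake": True}, {"has_lp_stake": False}, {"has_lp_stake": False}]
    assert board_is_valid(board)                                # 1 of 3: minority
    assert not can_vote(board[0], conflicts_with_mission=True)  # must abstain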

109. gdb+gs | 2019-03-11 18:50:22
>>chasea+1r
(I work at OpenAI.)

See my tweet about this: https://twitter.com/gdb/status/1105173883378851846

112. gdb+At | 2019-03-11 18:57:27
>>rsp198+iq
The Nonprofit has full control, in a legally binding way: https://openai.com/blog/openai-lp/#themissioncomesfirst

130. gdb+Tz | 2019-03-11 19:48:57
>>tschwi+Py
Our mission is articulated here and does not change: https://openai.com/charter/. As we say in the Charter, our primary means of accomplishing the mission is to build safe AGI ourselves. That means raising billions of dollars, without which the Nonprofit will fail at its mission. That's a huge amount of money and not something we could raise without changing structure.

Regardless of structure, it's worth humanity making this kind of investment because building safe AGI can return orders of magnitude more value than any company has to date. See one possible AGI application in this post: https://blog.gregbrockman.com/the-openai-mission#the-impact-...

136. felipe+bB | 2019-03-11 20:00:02
>>komali+Gh
> 3. Robot takeover

This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that it isn't that the machines have revolted, but rather that, from their point of view, their masters have disappeared.

[1] https://en.wikipedia.org/wiki/Blame!_(film)

140. gdb+7D | 2019-03-11 20:13:28
>>jpdus+ew
(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:

https://twitter.com/Miles_Brundage/status/110519043405200588...

Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.

149. codeki+MF | 2019-03-11 20:30:16
>>wycs+hD
> (AGI) — which we define as automated systems that outperform humans at most economically valuable work — [0]

I don't doubt that OpenAI will be doing absolutely first-class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.

For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world-class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse the idea with HLI (human-level intelligence). See [1] for a good discussion.

They will fail to create AGI--mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter: who will stop whom from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this... not a startup... not now... not ever. Period.

[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...

162. brian_+0L | 2019-03-11 21:08:10
>>not_ai+tH
What makes you so sure that what you're doing isn't pattern recognition?

When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? Ten different people would probably give you ten different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. Yet even though they failed to mention it, they wouldn't misunderstand you when you did mention loving some apple!

And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns it can't say "the ball run" in just the same way you learn to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
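
To make that concrete: even a crude bigram count over a toy corpus (invented here) shows the frequency-based "sounds right" effect, and an RNN is a vastly more sophisticated version of the same idea:

    # Toy bigram "grammar": acceptability as nothing but pattern frequency.
    from collections import Counter

    corpus = "the ball runs . the dog runs . the ball rolls . the dogs run .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def attested_pairs(sentence):
        words = sentence.split()
        # how many adjacent word pairs have been seen before?
        return sum(bigrams[(a, b)] > 0 for a, b in zip(words, words[1:]))

    print(attested_pairs("the ball runs"))  # 2 -- every pair attested: sounds right
    print(attested_pairs("the ball run"))   # 1 -- "ball run" never seen: sounds wrong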

If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There are tactics, strategy(!); surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?

What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...

165. gdb+dP | 2019-03-11 21:39:05
>>Judgme+QH
Thanks for the thoughtful reply, it's much appreciated.

> you also want the mission statement for people that aren't motivated by money

I wouldn't agree with this — we want people who are motivated by AGI going well, and don't want to capture all of its unprecedentedly large value for themselves. We think it's a strong point that OpenAI LP aligns individuals' success with success of the mission (and if the two conflict, the mission wins).

We also think it's very important not to release technology we think might be harmful, as we wrote in the Charter: https://openai.com/charter/#cooperativeorientation. There was a polarized response, but I'd rather err on the side of caution.

Would love people who think like that to apply: https://openai.com/jobs

174. sriniv+IU | 2019-03-11 22:20:11
>>tomp+bP
Yes, I assume a magic/soul/etc., and I believe that the human brain is not stand-alone in creating intelligence. Check out this exciting video for a discussion of how 'thinking' can happen outside the brain: https://neurips.cc/Conferences/2018/Schedule?showEvent=12487

181. Veedra+171 | 2019-03-12 00:18:11
>>komali+Gh
As far as I can disentangle from [1], OpenAI posits only moderate superhuman performance. The profit would come from a variant where OpenAI subsumes much but not all of the economy and does not bring things to post-scarcity. The nonprofit would take ownership of almost all of the generated wealth, but the investments would still have value since the traditional ownership structure might be left intact.

I don't buy the idea myself, but I could be misinterpreting.

[1] https://blog.gregbrockman.com/the-openai-mission

190. mises+Qq2 | 2019-03-12 15:45:18
>>tedivm+N31
> The Supreme Court refused to overturn the appeal, which means they upheld the decision of the lower courts.

Completely incorrect. "The Court usually is not under any obligation to hear these [appealed] cases, and it usually only does so if the case could have national significance, might harmonize conflicting decisions in the federal Circuit courts, and/or could have precedential value. In fact, the Court accepts 100-150 of the more than 7,000 cases that it is asked to review each year." Source: https://www.uscourts.gov/about-federal-courts/educational-re...

The SC not hearing the case doesn't mean they upheld the lower court's ruling; it means they aren't hearing the case.
