Let's not forget that Khosla himself does not exactly care about the public interest or existing laws: https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...
I just read the article, and am not sure I see the issue. Quote from his lawyer: “No owner of private business should be forced to obtain a permit from the government before deciding who it wants to invite onto its property”
Where's the issue here? The guy basically bought the property all around the beach and decided to close down access. I wouldn't say it's a nice thing to do, but it's legal. If I buy a piece of property, my rights as the owner should trump the rights of a bunch of surfers who want to get to a beach. The state probably should have been smart enough not to sell all the land.
Failing that, just seize a small portion via eminent domain: a 15-foot-wide strip on the edge of the property would likely come at a reasonable cost, and ought to provide an amicable resolution for all.
https://blog.ycombinator.com/updates-from-yc/
TechCrunch also had a source last Friday saying Altman intended to become CEO of OpenAI:
https://techcrunch.com/2019/03/09/did-sam-altman-make-yc-bet...
The Nonprofit would fail at this mission without raising billions of dollars, which is why we have designed this structure. If we succeed, we believe we'll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.
As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.
The Fifth Amendment's takings clause undoubtedly applies here: “…nor shall private property be taken for public use, without just compensation.” Requiring access without just compensation clearly violates it, and the state cannot violate this right (see above).
The fact that the Supreme Court didn't grant cert probably means they believe there is already precedent here, or just as probably that they didn't have the time. They always have a full docket; they were probably just out of slots.
I urge others to rebut this on legal grounds, not just say they disagree. People keep killing my comments, but it seems like they all just dislike the "selfish" appearance of the actions.
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?
Likewise, from the OpenAI Charter:
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.
______________________________
Some companies to compare with:
- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3...); Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date
- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and is now filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over a 45x return to date
> you don't want to "unduly concentrate power"? How will this work?
Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even making direct distributions to the world.
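To make the mechanics concrete, here is a rough sketch of how a capped-return split works; this is my own illustration, not OpenAI's actual terms, and the cap multiple and dollar figures are hypothetical. The investor's payout is limited to a fixed multiple of what they put in, and anything created above that cap belongs to the Nonprofit.

    # Illustrative sketch only; cap_multiple and the amounts below are hypothetical,
    # not OpenAI LP's actual terms.
    def split_returns(invested, value_created, cap_multiple=100.0):
        """Return (investor_payout, nonprofit_share) for a capped-return stake."""
        cap = invested * cap_multiple
        investor_payout = min(value_created, cap)
        nonprofit_share = max(value_created - cap, 0.0)
        return investor_payout, nonprofit_share

    # A $10M stake entitled to $5B of created value: the investor is capped at $1B,
    # and the remaining $4B flows to the Nonprofit.
    print(split_returns(10e6, 5e9))  # (1000000000.0, 4000000000.0)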
(I work at OpenAI.)
The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst
See my tweet about this: https://twitter.com/gdb/status/1105173883378851846
Regardless of structure, it's worth humanity making this kind of investment because building safe AGI can return orders of magnitude more value than any company has to date. See one possible AGI application in this post: https://blog.gregbrockman.com/the-openai-mission#the-impact-...
This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that the machines haven't revolted, but rather that, from their point of view, their masters have disappeared.
I think this tweet from one of our employees sums it up well:
https://twitter.com/Miles_Brundage/status/110519043405200588...
Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.
If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.
I don't doubt that OpenAI will be doing absolute first class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.
For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world-class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse that idea with HLI (human-level intelligence). See [1] for a good discussion.
They will fail to create AGI--mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter; who will stop whom from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this... not a startup... not now... not ever. Period.
[0] https://blog.gregbrockman.com/the-openai-mission
[1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...
When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. Yet even though they failed to mention it, they wouldn't misunderstand you when you did mention loving an apple!
And it's not just vocabulary: the successes of RNNs show that grammar is also mostly patterns. Complicated and hard-to-describe patterns, for sure, but the RNN learns it can't say "the ball run" in just the same way you learn to say "the ball runs": by seeing enough examples that some constructions just sound right and some sound wrong.
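As a rough illustration of that point (my own toy sketch, not anything from OpenAI; the corpus, model size, and training steps are made up), a tiny character-level LSTM trained on a handful of sentences ends up scoring "the ball runs." higher than "the ball run." purely from exposure:

    # Toy sketch: a character-level LSTM picks up subject-verb agreement from examples.
    import torch
    import torch.nn as nn

    corpus = "the ball runs. the dog runs. the balls run. the dogs run. " * 30
    chars = sorted(set(corpus))
    stoi = {c: i for i, c in enumerate(chars)}

    def encode(s):
        return torch.tensor([[stoi[c] for c in s]])

    class CharLM(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)
        def forward(self, x):
            out, _ = self.lstm(self.embed(x))
            return self.head(out)

    model = CharLM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    data = encode(corpus)
    for _ in range(200):  # next-character prediction over the whole corpus
        logits = model(data[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    def score(s):
        """Average log-probability the model assigns to each next character of s."""
        x = encode(s)
        logp = torch.log_softmax(model(x[:, :-1]), dim=-1)
        return logp.gather(-1, x[:, 1:].unsqueeze(-1)).mean().item()

    print(score("the ball runs."), score("the ball run."))  # the grammatical one scores higher

Nothing in the model knows what a "subject" or a "verb" is; it just sees which continuations follow which contexts, which is the point being made about grammar as pattern.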
If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!), surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?
What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...
> you also want the mission statement for people that aren't motivated by money
I wouldn't agree with this — we want people who are motivated by AGI going well, and don't want to capture all of its unprecedentedly large value for themselves. We think it's a strong point that OpenAI LP aligns individuals' success with success of the mission (and if the two conflict, the mission wins).
We also think it's very important not to release technology we think might be harmful, as we wrote in the Charter: https://openai.com/charter/#cooperativeorientation. There was a polarized response, but I'd rather err on the side of caution.
Would love people who think like that to apply: https://openai.com/jobs
I don't buy the idea myself, but I could be misinterpreting.
Completely incorrect. "The Court usually is not under any obligation to hear these [appealed] cases, and it usually only does so if the case could have national significance, might harmonize conflicting decisions in the federal Circuit courts, and/or could have precedential value. In fact, the Court accepts 100-150 of the more than 7,000 cases that it is asked to review each year." Source: https://www.uscourts.gov/about-federal-courts/educational-re...
The SC not hearing the case doesn't mean they uphold the lower court's ruling; it means they aren't hearing the case.