I wonder if the profit cap multiple is going to end up being a significant signalling risk for them. A down-round is such a negative event in the valley, I can imagine an "increasing profit multiple" would have to be treated the same way.
One other question for the folks at OpenAI: How would equity grants work here? You get X fraction of an LP that gets capped at Y dollar profits? Are the fractional partnerships transferable if earned into?
Would you folks think about publishing your docs?
We've made the equity grants feel very similar to startup equity — you are granted a certain number of "units" which vest over time, and more units will be issued as other employees join in the future. Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.
Let's not forget that Khosla himself does not exactly care about public interest or existing laws https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...
Is this due to long term capital gains? Do you allow for early exercising for employees? Long term cap gains for options require holding 2 years since you were granted the options and 1 year since you exercised.
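For concreteness, the ISO holding-period rule described above works out to roughly the following (a rough sketch with hypothetical dates, ignoring leap-day edge cases; not tax advice):

    from datetime import date

    # Qualifying disposition for long-term capital gains treatment, as described above:
    # sell more than 2 years after the grant date AND more than 1 year after exercise.
    def qualifies_for_ltcg(grant: date, exercise: date, sale: date) -> bool:
        two_years_from_grant = date(grant.year + 2, grant.month, grant.day)
        one_year_from_exercise = date(exercise.year + 1, exercise.month, exercise.day)
        return sale > two_years_from_grant and sale > one_year_from_exercise

    # Hypothetical example: early exercise one month after grant, sale two years later.
    print(qualifies_for_ltcg(date(2019, 3, 1), date(2019, 4, 1), date(2021, 4, 2)))  # True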
But I don't think it's so morally wrong that OpenAI shouldn't do business with him, since they have the security mechanisms and limit his power. He's just a grumpy old guy who doesn't want to share his beach.
I just read the article, and am not sure I see the issue. Quote from his lawyer: “No owner of private business should be forced to obtain a permit from the government before deciding who it wants to invite onto its property"
Where's the issue here? The guy basically bought the property all around the beach and decided to close down access. I wouldn't say it's a nice thing to do, but it's legal. If I buy a piece of property, my rights as the owner should trump the rights of a bunch of surfers who want to get to a beach. The state probably should have been smart enough not to sell all the land.
Failing that, just seize a small portion via eminent domain: a 15-foot-wide strip on the edge of the property would likely come at a reasonable cost, and ought to provide an amicable resolution for all.
Since I'm not a lawyer, can you help me understand the theoretical limits of the LP's "lock in" to the Charter? In a cynical scenario, what would it take to completely capture OpenAI's work for profit?
If the Nonprofit's board was 60% people who want to break the Charter, would they be capable of voting to do so?
He was also completely aware of this when he bought the property, so it's not like this is a surprise or someone forcing him to change things. He's the one who broke the law and broke the status quo that had existed at that beach.
Was this announced before or is this the first time they've mentioned it?
Was it a particular event, a conversation, perhaps just an incremental ideation without any actual epiphany needed, etc?
The courts already decided against him; he just can afford to pay the fine and continue restricting access.
Elon was irritated that he was behind in the AI intellectual property race and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing - "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI but I don't believe for a second that he will not participate in this LP]
https://blog.ycombinator.com/updates-from-yc/
And TechCrunch had a source last Friday as well that said Altman intended to become CEO of OpenAI:
https://techcrunch.com/2019/03/09/did-sam-altman-make-yc-bet...
I think all of us here are tired of "altruistic" tech companies which are really profit mongers in disguise. The burden is on you all to prove this is not the case (and this doesn't really help your case).
I agree this isn't a non-profit any more. It seems like that's the goal: they want to raise money the way they'd be able to as a normal startup (notably, from Silicon Valley's gatekeepers who expect a return on investment), without quite turning into a normal startup. If the price for money from Silicon Valley's gatekeepers is a board seat, this is a safer sort of board seat than the normal one.
(Whether this is the only way to raise enough money for their project is an interesting question. So is whether it's a good idea to give even indirect, limited control of Friendly AI to Silicon Valley's gatekeepers - even if they're not motivated by profit and only influencing it with their long-term desires for the mission, it's still unclear that the coherent extrapolated volition of the Altmans and Khoslas of the world is aligned with the coherent extrapolated volition of humanity at large.)
> OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.
One of the key reasons to incorporate as a PBC is to allow "maximizing shareholder value" to be defined in non-monetary terms (e.g., impact on the community, environment, or workers).
How is this structure different from a PBC, or why didn't you go for a PBC?
Not his beach. That's the point.
- Fiduciary duty to the charter
- Capped returns
- Full control to OpenAI Nonprofit
LPs have much more flexibility to write these in an enforceable way.
So someone who invests $10 million has their investment “capped” at $1 billion. Lol. Basically unlimited unless the company grew to a FAANG-scale market value.
'We're connecting everyone'
I admire the will to want to do things differently, but the lack of self awareness of some drinking the koolaid bothers me - I don't know the word for it exactly.
A 100x cap on returns is simply not a non-profit, point blank. Any investor, with any risk profile, in any industry, at any stage - would be super happy with a 100x return.
I also don't doubt the 'market reality' of needing to have such a situation in order to attract investment, I mean, it'd be very hard to bring in money without providing some return ....
... but this is called 'capitalism'.
i.e. the reality of the market, risks etc. are forcing their hand into a fairly normative company profile, with some structural differences which facilitate mostly PR spin.
Companies are not 'for profit' because they are inherently greedy, it's just a happy equilibrium for most scenarios: you need investors? Well, they want risk-adjusted returns.
Also implicit in the 100x capped returns is that the company gets to keep any money beyond the cap! The money goes into more staff, capex, higher prices to suppliers, or lower prices to consumers. But within that framework, on a very cynical take, there's also room for massive payouts to employees.
I love the motivation, but I wish we would be more objective in terms of 'what things really are' these days. Photos of 'families with babies' don't make a company 'better'. We all have families that we love, workmates we like (or not sometimes).
More like: If we make enough money to own the whole world, we'll give you some food not to starve.
This is equivalent to saying:
"If you put 10m$ into us for 20% of the post-money business, anything beyond a 5B$ valuation you don't see any additional profits from" which seems like a high but not implausible cap. I suspect they're also raising more money on better terms which would make the cap further off.
The Nonprofit would fail at this mission without raising billions of dollars, which is why we have designed this structure. If we succeed, we believe we'll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.
As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.
Which one of these mission statements is Alphabet's for-profit Deepmind and which one is the "limited-profit" OpenAI?
"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."
"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."
The Fifth Amendment takings clause then undoubtedly applies: “…nor shall private property be taken for public use, without just compensation.” Forcing access without compensation clearly violates it, and the state cannot violate this right (see above).
The fact that the Supreme Court didn't grant cert probably means they believe there is already precedent here, or just as probably that they didn't have the time. They always have a full docket; they were probably just out of slots.
I urge others to rebut this from a legal sense, not just say they disagree. People keep killing my comments, but it seems like they all just dislike the "selfish" appearance of the actions.
You are helping the planet if those customers would've bought ICE luxury vehicles instead of BEV luxury vehicles. I'm not sure BEV could be done any other way but a top-down, luxury-first approach. So, what exactly is your gripe there? Are you a climate change denier or do you believe that cheap EVs were the path to take?
>The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission
Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi books that have explored what will happen.
1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.
2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.
3. Robot takeover. Money, and humanity, is gone.
Sure, silliness in fiction, but is there a reasonable alternative from the creation of actual, strong general artificial intelligence? I can't see a world with this entity in it that the question of "what happens to the investors' money" is a relevant question at all. Basically, if you succeed, why are we even talking about investor return?
> That sounds like the delusion of most start-up founders in the world.
huh? are you disputing that AGI would create unprecedented amounts of value?
"any existing company" only implies about $1T of value. that's like 1 year of indonesia's output. that seems low to me, for creating potentially-immortal intelligent entities?
Also, what are the consequences for failing to meet the goals? "We commit to" could really have no legal basis depending on the prevailing legal environment.
Reading pessimistically, I see the "we'll assist other efforts" clause as a way in which the spirit in which the charter is apparently offered could be subverted -- you assist a private company, and that company doesn't have anything like the charter and instead uses the technology and assistance to create private wealth/IP.
Being super pessimistic, when the Charter organisation gets close, a parallel business could be started, which would automatically be "within 2 years", and so effort could then -- within the wording of the charter -- be diverted into that private company.
A clause requiring those who wish to use any of the resources of the Charter company to also make developments available reciprocally would need to be added.
Rather like share-alike or other GPL-style licenses that require patent licensing to the upstream creators.
Edit: Remove Oxford, because originally I was making a full list .. but then realized I couldn't remember which Canadian school was the AI leader.
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?
Likewise, also from the OpenAI charter:
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.
______________________________
In one (I can't find the title), MIT students make an SGAI and somehow manage to keep it contained (away from the internet). They feed it animated Disney movies and it cranks out the best animated movies ever made. They make billions. Eventually they make "live-action" movies that are indistinguishable from the real thing. Then they make music, books, etc, and create an unstoppable media force.
They could leverage the AI to discover hyper-efficient supply chain methods.
They could sequence genomes and run experiments, and sell the data.
Possibly exciting things around weather prediction.
Very exciting things around any research.
Some companies to compare with:
- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3... Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date
- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and now is filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over 45x return to date
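The multiples above are just exit valuation divided by entry valuation, using the figures cited (a quick sketch; actual investor returns depend on dilution, preferences, and later rounds):

    # Rough return multiples implied by the valuations cited above.
    rounds = {
        "Stripe": (100e6, 22.5e9),   # Series A post-money -> Series E post-money
        "Slack":  (220e6, 10e9),     # Series C valuation  -> expected IPO valuation
    }

    for name, (entry, exit_val) in rounds.items():
        print(f"{name}: ~{exit_val / entry:.0f}x")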
> you don't want to "unduly concentrate power"? How will this work?
Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even making direct distributions to the world.
(I work at OpenAI.)
The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst
I'm assuming these investors have already provided capital.
Shouldn't it have some kind of large scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?
EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.
Have you decided in which direction you might guide the AGI’s moral code? Or even a decision making framework to choose the ideal moral code?
But there are precedents for investing billions of dollars into blue sky technologies and still being able to spread the wealth and knowledge gathered - it's called government investment in science - it has built silicon chips and battery technologies and ... well quite a lot.
Is this company planning on "fundamental" research (anti-adversarial, "explainable" outcomes?) - and why do we think government investment is not good enough?
Or, worryingly, are the major tech leaders now so rich that they can honestly take on previous government roles (with only the barest of nods to accountability and legal obligation to return value to the commons)?
I am a bit scared that it's the latter - and even then this is too expensive for any one firm alone.
The OpenAI staff are literally some of the most employable folks on earth; if they have a problem with the new mission it's incredibly easy for them to leave and find something else.
Additionally, I think there's a reason to give Sam the benefit of the doubt. YC has made multiple risky bets that were in line with their stated mission rather than a clear profit motive. For example, adding nonprofits to the batch and supporting UBI research.
There's nothing wrong with having a profit motive or using the upsides of capitalism to further their goals.
=====
Who’s involved
* OpenAI Nonprofit’s board consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley.
* Elon Musk left the board of OpenAI Nonprofit in February 2018 and is not formally involved with OpenAI LP. We are thankful for all his past help.
* Our investors include Reid Hoffman’s charitable foundation and Khosla Ventures, among others. We feel lucky to have mission-aligned, impact-focused, helpful investors!
Now with OpenAI leaving the non-profit path the Charter content, fuzzy as it is, is 100% up for interpretation. It does not specify what "benefit of all" or "undue concentration of power" means concretely. It's all up for interpretation.
So at this point the trust that I can put into this Charter is about the same that I can put into Google's "Don't be evil"...
If no IP was sold to the new OpenAI LP because some or all of the IP created under the original nonprofit OpenAI was open sourced, will the new OpenAI LP continue that practice?
Grammar--would change to: as broad an impact
See my tweet about this: https://twitter.com/gdb/status/1105173883378851846
University of Toronto
But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.
So, what does that commitment mean?
If an application benefits some people and harms others, is it unacceptable? What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?
Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?
What is the line?
How will OpenAI do that?
Sorry guys, but before you were probably able to get talent which is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally ok to be an AI startup. You just shouldn't pretend to be a non-profit then...
You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.
You went from "totally open" to "partially for profit" and "we think this is too dangerous to share" in three years. If you were on the outside, where would you predict this trend is leading?
This change seems to be about ease of raising money and retaining talent. My question is: are you having difficulty doing those things today, and do you project having difficulty doing that in the foreseeable future?
I'll admit I'm skeptical of these changes. Creating a 100x profit cap significantly (I might even say categorically) changes the mission and value of what you folks are doing. Basically, this seems like a pretty drastic change and I'm wondering if the situation is dire enough to warrant it. There's no question it will be helpful in raising money and retaining talent, I'm just wondering if it's worth it.
Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?
Regardless of structure, it's worth humanity making this kind of investment because building safe AGI can return orders of magnitude more value than any company has to date. See one possible AGI application in this post: https://blog.gregbrockman.com/the-openai-mission#the-impact-...
And they are unironically talking about creating AGI. AGI is awesome of course, but maybe that is a tiny little bit overconfident?
The reason we care about slavery is because it is bad for a conscious being, and we have decided that it is unethical to force someone to endure the experience of slavery. If there is no conscious being having experiences, then there isn't really an ethical problem here.
This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that it isn't that the machines have revolted, but rather that from their point of view their masters have disappeared.
Also what makes you believe that Open AI will get there way ahead of thousands of other research labs?
I think this tweet from one of our employees sums it up well:
https://twitter.com/Miles_Brundage/status/110519043405200588...
Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.
If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.
I'd like to offer up an alternate opinion: non-profit operating models are generally ineffective compared to for-profit operating models.
There are many examples.
* Bill Gates is easy; make squillions being a merciless capitalist, then turn that into a very productive program of disease elimination and apparently energy security nowadays.
* Open source is another good one in my opinion - even when they literally give the software away, many of the projects leading their fields (eg, Google Chrome, Android, PostgreSQL, Linux Kernel) draw heavily on sponsorship by for-profit companies using them for furthering their profits - even if the steering committee is nominally non-profit.
* I have examples outside software, but they are all a bit complicated to type up. Things like China's rise.
It isn't that there isn't a place for researchers who are personally motivated to do things; there is just a high correlation between something making a profit and it getting done to a high standard.
For those (including myself) who wonder whether a 100x cap will really change an organization from being profit-driven to being positive-impact-driven:
How could we improve on this?
One idea is to not allow investors on the board. Investors are profit-driven. If they're on the board, you'll likely get pressure to do things that optimize for profit rather than for positive impact.
Another idea is to make monetary compensation based on some measure of positive impact. That's one explicit way to optimize for positive impact rather than money.
I don't doubt that OpenAI will be doing absolute first class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.
For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse the idea with HLI(human-level intelligence). See [1] for a good discussion.
They will fail to create AGI--mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter, who will stop who from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this...not a startup...not now...not ever. Period.
[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...
Will never work in practice
I just feel like you're trying to have the best of both worlds. You want the hypergrowth startup that attracts talent and investors, but you also want the mission statement for people that aren't motivated by money. I suspect trying to maintain this middle ground will be an incredibly damaging factor moving forward, as the people who are purely profit driven will look elsewhere and the people who are truly mission driven will also look elsewhere.
I fully appreciate how challenging these things can be, that these decisions aren't trivial, and that you're obviously trying to do something different...but I really think you're taking the wrong step here. But because you will see short term gains from all the extra initial investment, you won't realize it for years, until it's too late and the culture has permanently shifted.
That said, while my trust has somewhat eroded (also because you wouldn't release details of your model recently), I still wish you luck in your mission.
When you learn a language, aren't you just matching sounds with the contexts in which they're used? What does "love" mean? 10 different people would probably give you 10 different answers, and few of them would mention that the way you love your apple is pretty distinct from the way you love your spouse. Though, even though they failed to mention it, they wouldn't misunderstand you when you did mention loving some apple!
And it's not just vocabulary, the successes of RNNs show that grammar is also mostly patterns. Complicated and hard to describe patterns, for sure, but the RNN learns it can't say "the ball run" in just the same way you learn to say "the ball runs", by seeing enough examples that some constructions just sound right and some sound wrong.
If you hadn't heard of AlphaGo you probably wouldn't agree that Go was "just" pattern matching. There's tactics, strategy(!), surely it's more than just looking at a board and deciding which moves feel right. And the articles about how chess masters "only see good moves"? Probably not related, right?
What does your expensive database consultant do? Do they really do anything more than looking at some charts and matching those against problems they've seen before? Are you sure? https://blog.acolyer.org/2017/08/11/automatic-database-manag...
Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).
For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.
If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.
> you also want the mission statement for people that aren't motivated by money
I wouldn't agree with this — we want people who are motivated by AGI going well, and don't want to capture all of its unprecedentedly large value for themselves. We think it's a strong point that OpenAI LP aligns individuals' success with success of the mission (and if the two conflict, the mission wins).
We also think it's very important not to release technology we think might be harmful, as we wrote in the Charter: https://openai.com/charter/#cooperativeorientation. There was a polarized response, but I'd rather err on the side of caution.
Would love people who think like that to apply: https://openai.com/jobs
This "cooperative" ostensibly elects its board. In reality, nomination by existing members of the REI board is the only way to stand for election by the REI membership, and when you vote you only by marking "For" the nominated candidates (there's no information on how to vote against, though at another time they indicated that the alternative was "Withold vote"). While the board members don't earn much, there is a nice path from board member to REI executive ... which can pay as much as $2M/year for the CEO position.
The shape of resultant word strings indeed form patterns. However, matching a pattern is, in fact, different than being able to knowledgeably generate those patterns so they make sense in the context of a human conversation. It has been said that mathematics is so successful because it is contentless. This is a problem for areas that cannot be treated this way.
Go can be described in a contentless (mathematical) way, therefore success is not surprising (maybe to some it was).
It is those things that cannot be described in this manner where 'AGI' (Edit: 'AGI' based on current DL) will consistently fall down. You can see it in the datasets....try to imagine creating a dataset for the machine to 'feel angry'. What are you going to do....show it pictures of pissed off people? This may seem like a silly argument at first, but try to think of other things that might be characteristic of 'GI' that it would be difficult to envision creating a training set for.
If you have AGI, it is very clear you could very quickly displace the entire economy, especially as inference is much cheaper than training: which implies there will be plenty of hardware available at the time AGI is created.
These claims are certainly plausible to me, but they are by no means obvious. (For reference, I'm a postdoc who specializes in machine learning.)
Pattern recognition works when there is a pattern (repetitive structure). But in the case of outliers, there is no repetitive structure and hence there is no pattern. For example, what is the pattern when a kid first learns 1+1=2? or why must 'B' come after 'A'? It is taught as a rule(or axiom or abstraction) using which higher level patterns can be built. So, I believe that while pattern recognition is useful for intelligence, it is not all there is to intelligence.
I have found Quantum ideas and observations too unnerving to accept a finite and discretized universe.
Edit: this is in response to Go, or StarCraft, or anything that is boxed off -- these AIs will eventually outperform humans on a grand scale, but the existence of 'constants' or being in a sandbox immediately precludes the results from speaking to AI's generalizability.
I don't think I disagree very much with you then.
>People thought the same thing about nuclear energy. A popular quote from the 50s was "energy too cheap to meter." Yet here we are in a world where nuclear energy exists but unforeseen factors like hardware costs cause the costs to be much more than optimists expected.
In worlds where this is true, OpenAI does not matter. So I don't really mind if they make a profit.
Or to put it another way, comparative advantage dulls the claws of capitalism such that it tends to make most people better off. Comparative advantage is much, much more powerful than most people think. But nonetheless, in a world where software can do all economically-relevant tasks, comparative advantage breaks, at least for human workers, and the Luddite fallacy becomes a non-fallacy. At this point, we have to start looking at evolutionary dynamics instead of economic ones. An unaligned AI is likely to win in such a situation. Let's call this scenario B.
OpenAI has defined the point at which they become redistributive in the low hundreds of billions. In worlds where they are worth less than hundreds of billions (scenario A, which is broadly what you describe above) they are not threatening, so I don't care - they will help the world as most capitalist enterprises tend to, and most likely offer more good than the externalities they impose. And since in scenario A they will not have created cheap software that is capable of replacing all human capital, comparative advantage will work its magic.
In worlds where they are worth more, scenario B, they have defined a plan to become explicitly redistributive and compensate everyone else, who are exposed to the extreme potential risks of AI, with a fair share of the extreme potential upsides. This seems very, very nice of them.
And should the extreme potentials be unrealizable then no problem.
This scheme allows them to leverage scenario A in order to subsidize safety research for scenario B. This seems to me like a really good thing, as it will allow them to compete with organizations, such as Facebook and Baidu, that are run by people who think alignment research is unimportant.
I honestly don't understand how you can take that action and try to turn it around the way you are.
Since you seem to be answering questions in this thread, here's one:
How does OpenAI LP's structure differ from that of a L3C (Low-profit Limited Liability company)?
I don't buy the idea myself, but I could be misinterpreting.
However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. Are you really expecting to beat the return of the most successful start-ups to date by orders of magnitude and to be THE company developing the first AGI? (And even in this -for me- extremely unlikely case, the cap most likely wouldn't matter, as if a company developed an AGI worth trillions, the government/UN would have to tax/license/regulate it.)
Come on, you are deceiving yourself (and your employees apparently as well, the Twitter you quoted is a good example). This is a non-profit pivoting into a normal startup.
Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "Open". Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/..
You guys raised free money in the form of grants, acquired the best talent in the name of a non-profit that has a purpose of saving humanity, and always had publicity stunts that are actually hurting science and the AI community, and are taking the first steps against reproducibility by not releasing GPT-2 so you can further commercialize your future models.
Also, you guys claim that the non-profit board retains full control, but it seems like the same 7 white men on that board are also on the board of your for-profit company and have a strong influence there.
Call it what you want, but I think this was planned out from day one. Now, you guys won the game. It's just a matter of time to dominate the AI game, keep manipulating us, and appear on the Forbes list.
Also, I expect that you guys will dislike that comment instead of having an actual dialogue and discussion.
Between the market pressures from investors, employees, and competitors, to what extent can a company really stay true to its mission and deny potential profit that conflicts with it?
Also, it’s hard to root for specific for profit companies (although I’m rooting for capitalism per se).
We can just look at Google and see that "don't be evil" does not work when you've got billions of dollars and reach into everyone's private lives.
Completely incorrect. "The Court usually is not under any obligation to hear these [appealed] cases, and it usually only does so if the case could have national significance, might harmonize conflicting decisions in the federal Circuit courts, and/or could have precedential value. In fact, the Court accepts 100-150 of the more than 7,000 cases that it is asked to review each year." Source: https://www.uscourts.gov/about-federal-courts/educational-re...
The SC not hearing the case doesn't mean they uphold the lower court's ruling, it means they aren't hearing the case.
Similarly, what's stopping an investor from implicit control by threat of removing their investment?
It could be true that every complex problem solving system is conscious, and in that case maybe there are highly unintuitive conscious experiences, like being a society, or maybe it is an extremely specific type of computation that results in consciousness, and then it might be something very particular to humans.
We have no idea whatsoever.
Your arguments seem to also apply to humans, and clearly humans have figured out how to be intelligent in this universe.
Or maybe you're saying that brains are taking advantage of something at the quantum level? Computers are unable to efficiently simulate quantum effects, so AGI is too difficult to be feasible?
I admit that's possible, but it's a strong claim and I don't see why it's more likely than the idea that brains are very well structured neural networks which we're slowly making better and better approximations of.
Given this track record, I have learned to be suspicious of that part of my brain which reflexively says "no, I'm doing something more than pattern matching"
It sure feels like there's something more. It feels like what I do when I program or think about solutions to climate change is more than pattern matching. But I don't understand how you can be so sure that it isn't.
Greg, would you please elaborate more on this part of your tweet? Also, can the OpenAI LP commercialize work/research produced by OpenAI non-profit? Can you use grants that were raised by the non-profit into recruiting for the LP?
Thanks for taking questions and engaging in conversations to make things clear for our community.