So someone who invests $10 million has their return “capped” at $1 billion. Lol. Basically unlimited unless the company grows to a FAANG-scale market value.
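For concreteness, the cap arithmetic here is just a flat multiple: $10 million capped at $1 billion works out to 100x. A minimal sketch of that payout rule (my own illustration, not OpenAI's actual LP terms):

    # Sketch of a flat capped-return payout. The 100x default is the multiple
    # implied by "$10M capped at $1B"; the real LP terms may differ per round.
    def capped_payout(investment, gross_value, cap_multiple=100.0):
        """Investor payout limited to cap_multiple times the original investment."""
        return min(gross_value, cap_multiple * investment)

    print(capped_payout(10e6, 5e9))    # 1000000000.0 -- clipped at the $1B cap
    print(capped_payout(10e6, 300e6))  # 300000000.0  -- below the cap, paid in full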
As described in our Charter (https://openai.com/charter/): that mission is to ensure that AGI benefits all of humanity.
Which of these mission statements belongs to Alphabet's for-profit DeepMind, and which to the "limited-profit" OpenAI?
"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."
"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."
>The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission
Is that the mission? Create AGI? If you create AGI, there are a myriad of sci-fi books that have already explored what happens next.
1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.
2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.
3. Robot takeover. Money, and humanity, are gone.
Sure, it's silliness in fiction, but is there a reasonable alternative outcome from the creation of actual, strong general artificial intelligence? I can't see a world with such an entity in it where "what happens to the investors' money" is a relevant question at all. Basically, if you succeed, why are we even talking about investor return?
> That sounds like the delusion of most start-up founders in the world.
Huh? Are you disputing that AGI would create unprecedented amounts of value?
"Any existing company" only implies about $1T of value. That's like one year of Indonesia's output. That seems low to me, for creating potentially-immortal intelligent entities?
Also, what are the consequences for failing to meet these goals? "We commit to" may have no legal force at all, depending on the prevailing legal environment.
Reading pessimistically, I see "we'll assist other efforts" as a way the spirit in which the charter is apparently offered could be subverted: you assist a private company, that company has nothing like the charter, and it uses the technology and assistance to create private wealth/IP.
Being super pessimistic: when the Charter organisation gets close, a parallel business can be started, which would automatically be "within 2 years", and so effort could then -- within the wording of the charter -- be diverted into that private company.
A clause would need to be added requiring those who wish to use any of the resources of the Charter company to make their own developments available reciprocally.
Rather like share-alike or other GPL-style licenses that require patent licensing back to the upstream creators.
Edit: Removed Oxford, because originally I was making a full list... but then realized I couldn't remember which Canadian school was the AI leader.
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?
Likewise, from the OpenAI charter:
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.
______________________________
Some companies to compare with:
- Stripe Series A was $100M post-money (https://www.crunchbase.com/funding_round/stripe-series-a--a3...); Series E was $22.5B post-money (https://www.crunchbase.com/funding_round/stripe-series-e--d0...) — over a 200x return to date
- Slack Series C was $220M (https://blogs.wsj.com/venturecapital/2014/04/25/slack-raises...) and is now filing to go public at $10B (https://www.ccn.com/slack-ipo-heres-how-much-this-silicon-va...) — over a 45x return to date
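For what it's worth, a quick sanity check of the multiples cited above (a sketch using only the figures quoted in this comment; these are valuation ratios between rounds, not returns realized by any particular investor):

    # Valuation multiples from the figures quoted above.
    cited = {
        "Stripe (Series A -> Series E)": (100e6, 22.5e9),
        "Slack (Series C -> IPO filing)": (220e6, 10e9),
    }
    for name, (early, late) in cited.items():
        print(f"{name}: {late / early:.0f}x")
    # Stripe (Series A -> Series E): 225x
    # Slack (Series C -> IPO filing): 45x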
> you don't want to "unduly concentrate power"? How will this work?
Any value in excess of the cap created by OpenAI LP is owned by the Nonprofit, whose mission is to benefit all of humanity. This could be in the form of services (see https://blog.gregbrockman.com/the-openai-mission#the-impact-... for an example) or even direct distributions to the world.
Shouldn't it have some kind of large-scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?
EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.
Have you decided in which direction you might guide the AGI’s moral code? Or even on a decision-making framework for choosing the ideal moral code?
University of Toronto
You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.
You went from “totally open” to “partially for profit” and “we think this is too dangerous to share” in three years. If you were on the outside, where would you predict this trend is leading?
Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?
The reason we care about slavery is that it is bad for a conscious being, and we have decided that it is unethical to force someone to endure the experience of slavery. If there is no conscious being having experiences, then there isn't really an ethical problem here.
This is an interesting take on what could happen if humans lose control of such an AI system [1]. [spoiler alert] The interesting part is that it isn't that the machines have revolted, but rather that, from their point of view, their masters have disappeared.
I don't buy the idea myself, but I could be misinterpreting.
It could be true that every complex problem-solving system is conscious, in which case maybe there are highly unintuitive conscious experiences, like being a society. Or maybe consciousness requires an extremely specific type of computation, in which case it might be something very particular to humans.
We have no idea whatsoever.