I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into private foundation status.
Was it a mistake to create OpenAI as a public charity?
Or was it a mistake to operate OpenAI as if it were a startup?
The problem isn't really either one; it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) as a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.
Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.
I don’t think the IRS cares much about this kind of thing. What would be the claim? That OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for the SEC more than the IRS.
Lol HN lawyering is hilarious.
From https://openai.com/our-structure
- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
- Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
- Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
- Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
- Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.
Did you think OP meant there was some inherent conflict of interest with charities?
Explain how an MS employee would have a greater conflict of interest.
https://www.irs.gov/charities-non-profits/charitable-organiz...
https://www.thecrimson.com/article/2023/5/5/epstein-summers-...
But all that seems a lot more like an episode of Succession and less like real life to be honest.
OpenAI is a charity nonprofit, in fact.
> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.
OpenAI Global LLC is a subsidiary two levels down from OpenAI. It is expressly subordinated (by the operating agreement that is the LLC's foundational document) to OpenAI’s charitable purpose, and it is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity acting on behalf of the OpenAI charity.
And, particularly, the OpenAI board is, as the excerpts you quote in your post expressly state, the board of the nonprofit at the top of the structure. It controls everything underneath because each of the subordinate organizations' foundational documents gives it (well, for the two entities with outside investment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.
I'm not criticizing. Big fan of avoiding being taxed to fund wars... but it's just funny to me; it seems like they're sort of having their cake and eating it too with this kind of structure.
Good for them.
OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.
OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by the foundational agreements that give them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC).
What's happening right now is people just starting to reckon with the fact that technological progress on its own is necessarily unaligned with human interests. This problem has always existed; AI just makes it acute and unavoidable, since it's no longer possible to invoke the long tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at its core a problem of reconciling this, and it will inherently fail in the absence of explicitly imposing non-Enlightenment values.
Seeking to build OpenAI as a nonprofit, as well as ousting Altman as CEO, are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensify it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.
The current setup is the only feasible one if they want to attract the kind of investment required. It can work if the board is pragmatic and has no conflict of interest, so preferably people with no stake in anything AI, either business or academic.
There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.
In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.
Lots of ventures cut corners early on that they eventually had to pay for, but cutting those corners was crucial to their initial success and growth.
Were you watching a different show than the rest of us?
I mean, that's certainly been my experience of it thus far: companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.
Delivering profits and shareholder value is the sole and dominant force in capitalism. It remains to be seen whether that is consistent with humanity's survival.
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field, especially compared to something like crypto.
also, infamously, they fundraised as a nonprofit but later walked that back, admitting they needed a for-profit structure to thrive, which Elon is miffed about and Sam has defended explicitly
A charity acting (due to the influence of a conflicted board member who doesn't recuse) contrary to its charitable mission, in the interests of the conflicted board member or whoever they represent, does something similar with regard to the firm's liability to various stakeholders with a legally enforceable interest in the charity and its mission. But it is also a public civil violation that can lead to IRS sanctions against the firm, up to and including monetary penalties and loss of tax-exempt status, on top of whatever private tort liability exists.
(But yes; what you describe is absolutely happening left and right...)
I think people forget sometimes that comments come with a context. If we are having a conversation about Deepwater Horizon, someone will chime in about how safe deep-sea oil exploration is and how many failsafes blah blah blah.
“Do you know where you are right now?”
There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.
> also, infamously, they fundraised as a nonprofit but later walked that back, admitting they needed a for-profit structure to thrive
No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return the profits from doing that to investors in order to scale up enough to do it. So they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit.
But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.
Just because someone says they agree with a mission doesn’t mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world, the worse the outcomes, because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades; I don’t need humility in a new situation.
I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.
Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.
I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.
I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.
Is this still true when the board gets overhauled after trying to uphold the moral compass?
If OpenAI didn't have adequate safeguards, either through negligence or because it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguards” thing.
We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing up specific arguments about how this situation is an anomaly, and few of them do that.
Instead it often sounds like “it’s very unusual for the front to fall off”.
Also, GP's snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big of people not to snark back at a snarker, but snarkers gonna snark.
> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.
But I guess people here are either waiting for wealth to trickle down on them, or believe the torrent of psychological operations so much that their minds close down when they intuit the circular, brutal nature of hierarchical, class-based society, and the utter illusion that democracy or meritocracy is.
The uppermost classes have been tricksters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint: it was psyopped in the early '90s by "libertarianism" and worship of bureaucracy to create a new class of cybernetic soldiers working for the oligarchy.
Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.
The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.
https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation
There are other similar examples like Ikea.
But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.
Look at these clowns (Ilya & Sam and their angry talkie-bot); it's a revelation, like Bill Gates on Linux in 2000:
That means the remaining conflicts are when the board has to make a decision between growing profits and furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.
How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?
> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.
The point of the board is to ensure the charter is being followed. When the biggest concern is "is our commercialization getting in the way of our charter", what else does it mean to replace "academics" with "businesspeople"?
It’s sickening.
Most people just tend to go about it more intelligently than Trump, but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.
If OpenAI is struggling this hard with the corporate alignment problem, how are they going to tackle the outer and inner alignment problems?
I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be involved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military; there's no way the feds didn't put a finger on the scale to protect their interests, at which point they wouldn't come back in to regulate.
Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...