Ilya has always seemed idealistic, and I'm guessing he was the reason for OpenAI's very strange structure. Ilya is the man when it comes to AI, so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya's idealistic notions (in particular that OpenAI is non-profit, unless forced to be capped-profit, and is categorically not in the business of making money or selling itself to MSFT or any other entity). "Helen" and "Tasha" are comically out of their depth and are loony toons, and simply decided some time ago to follow Ilya.
Sam got the call from MSFT to sell, and MSFT really ponied up (300B?). The inference costs for OpenAI are/were staggering, and they needed to sell (or get a large influx of capital, which was in the works). This ran counter to Ilya's idealistic notions. Sam attempted to negotiate with Ilya and the loony toons; a vote was called, and Sam's side lost, hard.
I think this tracks with all the data we have.
There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.
If this did happen, then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount, and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.
Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
Unlike other iconic company/founder origin stories, OpenAI really felt like they had hit a special team dynamic that was on the verge of something equally special.
In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.
If your goal is to make money, perhaps. I'd like to believe that for some of the people pushing the field forward, there are other motivations.
But I bet that they have a ton of very talented people whose values are more … common.
Ilya may be a singular talent, however.
It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.
I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
> Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.
It might be that this is good at the end of the day. OpenAI is just not structured to win.
I don't think this is as damning as you think.
I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:
* My shares become massively valuable via some unlikely non-profit-to-for-profit conversion. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.
* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.
For-profit doesn't automatically mean non-virtuous.
There are non-wacky non-profits.
If you can't accept that, focus more on making money and less on developing something new.
> I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
You seem to be under the impression that OpenAI is a non-profit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the non-profit owned under the same umbrella company. This is indeed an unusual corporate structure.
That's likely what OP is referring to as "wacky".
I hope that reporting is wrong.
Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?
Going the other way... imagine you worked at a company that put ideals first, but then you found out they were just blindly hyping that lie so they could vault themselves into the billionaires' club by selling your shared ideals out from underneath you? To, of all players, Microsoft.
> when you have (perhaps) the most valuable product in the world.
Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."
Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.
Mozilla Foundation/Corporation does this too, IIRC. It's what allows them to pursue serious revenue streams with the for-profit part while still steering their mission with the non-profit in charge, as long as they maintain some separation between the two in revenue terms.
EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.
>so people put up with his foolishness.
That was about Ilya. OP just implied that having ideals == being foolish. It's much the same move as calling a non-profit "wacky".
Is it being outside comp-sci that automatically invalidates proper use of your actual name? Or is there some other criterion by which their names are worth less?
They are also both fairly established in their respective fields - which, yes, isn't hard comp-sci, but if you think tech companies should have purely comp-sci board members, I'd call that incredibly naive.
They were also presumably vetted by the other board members - unless you think they are serving a different purpose on the board (diversity targets?) - which, if so, moves the comment from red-flag into misogynist territory.
Personally, I don't see anything in their CVs that would disqualify them from executing their function on the board, and I wouldn't call them incompetent at assessing whether a person lied or not (which, even in your theory, is what Sam would have done). You don't need to be an ML/AI expert for that.
The OP was clearly implying not being solely focused on getting the highest bid is loony and wacky.
Which may be true, but let’s not pretend that’s not what they’re saying.
OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?
What if they shoot the moon? Is it really that unlikely?
It is an honour-based system to clarify what you edited if it goes beyond typos/grammar.
Most probably GP used stronger words and then edited.
The point would have been fairly clear even if he had just used their names; the usage of quotes is quite bizarre.
If he wanted to get that point across, he should have called them "board members", which is a clear insinuation of puppetry.
> Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
- immediately calls their structure “strange” thanks to Ilya’s “idealism”.
- immediately calls him the "man" for his talents, but a fool for what, other than his ideals?
- also labels Helen and Tasha (in quotes for some reason) as fools
- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive
Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.
First of all, being non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.
And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.
Given the coherence of their post, I’d say they knew _exactly_ how they were using those quotes.
I don't know a thing about corporate structuring, so forgive my ignorance here, but even if they are "non-profit", can't they still pay very high salaries? Can't they still produce and sell products? They just can't sell shares or pay out profits as dividends, right?
Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.
I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned. And that clip that keeps getting shared, with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like that. Why did Satya laugh, and take so long to answer? Just weird.
> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]
For that specific example, Mozilla did also go with Yahoo for comparable revenue for a couple of years, IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corporation, that would have been the case regardless of their ownership structure.
> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:
https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation
https://foundation.mozilla.org/en/what-we-fund/
> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.
Ultimately I'm not sure what the point you're trying to argue is.
The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.
They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.
If they believe they are the best to control AGI: closed commercial models => money => growth => faster to AGI in the right hands (theirs).