The Verge reported "that the action was meant to signal to the board who would leave OpenAI to follow Altman to a new company." - https://www.theverge.com/2023/11/19/23968027/why-openai-empl...
Your Board Sucks.
>She served as chief technology officer of OpenAI since 2018 and as interim chief executive officer of OpenAI over the weekend from November 17, 2023 - November 19, 2023.
-Her wiki
It's pretty basic that when you fire someone abruptly they _do not_ get to come back into the damn office.
An external CEO that's stepping in not because they want to but because they feel they "need" to doesn't sound like a recipe for success.
The communication was bad (a sudden Friday message about him "not being candid"), but he doesn't say the reason itself was bad.
"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
He knows the reason, it's not safety, but he's not allowed to say what it is.
Given that, I think that the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, needing an investigator and interviews with many people. It sounds like perhaps there is a core dysfunction in the company that is part of the reason for the ouster.
I must say though, going by his tweets, Andrej Karpathy isn't all too impressed with the Board. So, that's there too.
Not sure why you got downvoted, but I am with you. I find all this hilarious.
How many more times in my life am I going to have to sit and watch a non-profit board destroy a piece of software, let it stagnate, or fumble a market-dominant position? You'd think we'd have learned from Mozilla that it just doesn't seem to work.
Vision, talent, and accountability for success build real change in technology, not the at-best navel-gazing academics and at-worst outright leeches who are attracted to non-profit boards.
Not to say that nonprofits are flawless, but they do seem to turn out stuff that's pretty important sometimes even when surrounded by for-profit competitors.
https://twitter.com/shengjia_zhao/status/1726543423824675154
The specific mix of profit and non-profit motives in this particular organization is confusing though, looking at it from the outside.
This is very interesting. You wouldn't normally hire an investigator to dig into a corporate shitshow. The firing of Sam Altman was a huge mess, but if there was a serious reason for it (like defrauding the board), and that reason were backed by an investigation report, it would suddenly make the board's actions much more justifiable. And it would put Microsoft in a tough spot, because they hired Altman back without any vetting whatsoever...
Doesn't it rely on processing power from MS, whom they've just royally pissed off, and who's in the process of grabbing all the exiting talent to presumably build a competitor?
Of course, no one really believes the probability of doom is that high.
People confabulate just like AI does. Just because the stories don’t add up doesn’t mean somebody deliberately lied.
But he just got the job and I'm sure many people are on PTO/leave for holidays. Give the guy some time. (And this is coming from someone who is pretty bearish on OpenAI going forward, just think it's fair to Shear)
Really sounds to me like he was just the only one available on short notice.
They should have picked Elon for extra meme points at least.
I don't see how he can turn this around, but kudos for trying and communicating openly about the short term plan.
It will? Microsoft is their GPU provider, and ChatGPT is already rate-limited. GTM team is openly lamenting the Custom GPT product as dead in the water and they're still employed by OpenAI.
Emmett is just doing the smartest thing to salvage the situation.
but ...
Can you explain what is meant by the word "safety"?
Many are mentioning this term, but it's not clear what its specific definition is in this context. And what would someone get fired over relating to it?
A for-profit corporation has accountability to its shareholders. If you're a shareholder, you can sell your shares. If enough people are willing to sell their shares at a low enough price, someone will come along and buy a majority of the shares and take over the company, maybe liquidating it. If something egregious enough happens, you might even be able to sue the company or the officers for a breach of fiduciary duty. Either way, the way to stay employed is to make money. If you make enough money, purely money-interested people will be willing to buy the shares of those who have other interests at a high price. For-profit corporations have an incentive to be as good as possible at making money.
A not-for-profit theoretically has accountability to the board, who aren't really accountable to anyone. They can be sued, but only if they do an extremely and formally bad job. The only thing that weeds out bad non-profits is donors.
It seems like there needs to be another type of organization, that can have objectives other than making money, where market forces still cause it to do as good a job as possible at that mission.
What could the reason be that would justify this kind of wait?
I'll point out that Sam also doesn't seem to want to say the reason (possibly he's legally forbidden?). And all of the people following him out of OpenAI don't know, and are simply trusting him enough to be willing to leave without knowing.
In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety at the possible detriment of fast advancements towards intelligence, and at the detriment of commercialization; while Sam Altman wants to prioritize the other two over excessive safety concerns. The new CEO is stating that this is not the core reason why the board took their decision.
You're saying this while the top thread on HN at the same time is about how Firefox is artificially slowed down on YouTube, lol. That is why they lost market share: not because they lack the incentive to shove ads in your face, but because the companies that run the internet also run Firefox's competition.
Firefox is objectively fine. Linux is fine; OpenAI as a research institute would be fine. They aren't stagnant, they're being gutted or undermined by competitors that will not see them succeed.
The real loss is taken by the other OpenAI investors.
"AI: I am sorry, I can't provide this information."
And who looks out for all of the other stakeholders who don't own shares?
I would not take those odds to destroy the world even inside a D&D campaign with a few friends. If that is really what they think they are building here...
> Was game of chicken until the very end. Only constant was board talking to just about no one.
https://twitter.com/ashleevance/status/1726469367565619590
They called Sam's bluff, using the talks about bringing him back just to buy enough time to replace Mira. They made the decision to sacrifice OpenAI as it once was.
But of course the question captivates the peanut gallery, so there's a certain importance to it. Making this empty promise costs nothing, hence it's there.
The best explanation I've seen is that Ilya is ok with commercializing the models themselves to fund AGI research but that the announcement of an app store for Laundry Buddy type "GPTs" at Dev Day was a bridge too far.
AI: I'm sorry, due to the ongoing conflict we currently don't provide information related to Russia. (You have been docked one social point for use of the following forbidden word: "White".)
Or maybe more dystopian…
AI: Our file on you suggests you may have recently become pregnant and therefore cannot provide you information on alcohol products. CPS has been notified of your query.
> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.
> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.
https://stratechery.com/2023/openais-misalignment-and-micros...
Edit: >>38348010 | https://twitter.com/karaswisher/status/1726598360277356775 (550 of 700 Employees OpenAI tell the board to resign)
If the investigator finds the board was wrong, it makes the new CEO an enemy of the board. If the investigator finds Sam Altman did bad things, it makes Microsoft look bad and incompetent for hiring him; MS is OpenAI's biggest client.
And if, as is likely, the investigator finds some blame here and there, and nothing conclusive, nobody's better off and a lot of time and energy was spent learning nothing.
What good can possibly come from this?
The investigation would also include checking if the board had some odd intentions by ousting Sam; so, no, not necessarily.
If there is misconduct from Sam, he will get fired. If he succeeds, MS will benefit from whatever success means.
On the other hand, other competitors won't be able to porch Sam at this point. This is something many do not get. Whatever happens in the next few weeks to a month, it won't hurt MS and won't benefit others, while on the outside we play the corporate compliance game.
OpenAI definitely needs professional structures. And MS will help them achieve that.
seems to me as if Microsoft's lawyers are the ones in over their head
they invested billions in an entity that has no power and is controlled by another entity they have no control over
This was worth 10B no doubt.
Emmett Shear presumably had very little to do with the prior events, but if he's going to salvage the company, he needs to control the narrative around the governance. That means either publicly justifying the board's actions or changing the board.
His title on LinkedIn is just "Member of Technical Staff". https://www.linkedin.com/in/shengjia-zhao/
Yes, see below.
> Whose interests are the OpenAI board representing?
OpenAI has a weird charter which mandates the board to uphold a fiduciary duty not to the shareholders but rather to being "broadly beneficial". This is very uncommon. It means that the board is legally required to uphold safety above all else; if they don't, the board members could get sued. The most likely person to fund such a lawsuit would be Elon, who donated a lot of money to the non-profit side of OpenAI.
Here's the OpenAI page which explains this unique charter: https://openai.com/our-structure Excerpt: “each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial”
If you are a customer, arrange to use alternative services. (It's always a good idea to not count one flaky vendor with a habit of outages and firing CEOs.)
If you are just eating popcorn, me too, pass the bowl.
but... it's a happy accident of new imagery. I imagine "porching" someone could mean to poach them... and then sit them on a nice rocking chair on the front porch, give them some lemonade, and make sure they do absolutely nothing.
(Kind of like how big companies acquire small, yet competitive, companies for no other reason than to put them out of business.)
I don't think Microsoft wants to "porch" Sam in this case, but they are happy to poach him and put him to work.
Well, in the US you can have a benefit corporation, or else certification as a "B Corp", which I just learned are different things while googling it to put a link here. Previously my impression was that "B Corp" was a legal status, but that's wrong, it's a certification by a nonprofit. In the US, a benefit corporation has a separate legal status as "a type of for-profit corporate entity whose goals include making a positive impact on society."
Both are kind of a niche thing still. I've seen a few "B Corp" logos among sustainable food companies, like King Arthur flour.
https://en.m.wikipedia.org/wiki/B_Corporation_(certification...
> Wikimedia Foundation
Please look into the financials of what these organizations actually spend their money on. You're just making huge assumptions that "non-profit = good" and that view of the world will soon come crashing down when you look into those two entities.
As you read the Mozilla ones, please repeat to yourself out loud "They had 32 percent of the market share in 2010", just to really drive it home.
I'm not an AI researcher, but I know what a neural network is, I've implemented a machine learning algorithm or two, and I can read a CS paper. Once the luddite cult murdering AI researchers was dead or imprisoned, I suspect the demand for mediocre self-taught AI researchers would increase and I might be motivated to become one.
If you somehow managed to destroy all copies of the best recent research, there are still many people with enough general knowledge of the techniques used who aren't currently working in the field to get things back to the current level of technology in under a decade given a few billion dollars to spend on it. Several of them are probably reading HN.
At this point we still have no idea what the outside board directors' issue might have been, but the fact that even their initial internal allies — co-founder and board member Ilya, CTO and interim CEO Mira, and the COO — all stopped supporting the three outside directors after engaging directly with them over the weekend is pretty damning. Unfortunately, the scope, conduct, and results of any such investigation are all entirely under the control of those same three outside board members, the same outside directors who abruptly fired the last interim CEO and the CEO before her within a period of 48 hours. Unlike most boards, they aren't accountable to shareholders, investors, employees, or anyone else.
It's mainly about who is allowed to control what other people can do, i.e. power.
Also, just because he is a recent grad doesn't mean he is not top talent. Age matters much less in the AI/learning community than in other areas.
Sure, sure...
But the general state of mind around all this AI stuff is strange. So WE-MUST... The AGI!!1
Money, yeah, some, but did we do the same with e.g. cars? Public hearings, regulations, pushing AI chips into everybody's phones before it even works or before phone companies have their own AI systems?
And just a strange feeling: is OpenAI roleplaying the Silicon Valley movie? Even the names are similar, in places :)
If you were Iranian and said "I'm not a nuclear physicist, but I do know the math and I have built a small reactor." I would strongly suggest you be on the lookout for Mossad agents.