Edit: an update to the Verge article sheds some more light, but I still consider it very sus since it's coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven't heard any resignations announced.
> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
The board just vaporised the tender offer, and likely much of their valuation. It’s hard to have confidence in that.
Says who? And did they resign?
I won’t be surprised if it’s the open arms of Microsoft. Microsoft embraced and extended OpenAI with their investment. Now comes the inevitable.
Seems like a lot of these board members have deep ties around various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E we, the public, had only been allowed brief glimpses into the current state of AI (e.g., "Look, this robot can sound like a human and book a restaurant reservation for you" -Google; "Look, this robot can help you consume media better" -many others). As a member of the public it went from "oh cool Star Trek idea, maybe we'll see it one day with flying cars" to "holy crap, I just felt a spark of human connection with a chat program."
So here's my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember his Lex podcast appearance where he said, paraphrasing, "So what do you think, should I do it? Should I open source and release it? Tell me to do it and I will." Ultimately, this is what "the board is focused on trust and safety" means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in it for the good of humanity and we know what's best. (Vs. the Altman style of: go fast, double-dog-dare the smart podcast dude to make an unprecedented historical decision to open source, be "wild" and let people / fate figure some of it out along the way.)
The question of OpenAI's true purpose being a form of controlled opposition is of course based on my speculation, but it's an honest question for the crowd here.
From the board, for not anticipating a backlash and caving immediately... from Microsoft, for investing in an endeavor that is purportedly chartered as a non-profit and governed by nobodies who can sink it on a whim, while having zero hard influence on the direction despite a large ownership stake.
Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.
Just form a new for profit company and be done with it. Altman's direction for profit is fine, but shouldn't have been pursued under the loose premise of a non profit.
While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?
It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.
Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
My interpretation of events is that the board believes Altman's actions have worked against the interest of building an AGI that benefits all of humanity: concentrating access to the AI among businesses could be the issue, or the focus on commercializing the existing LLMs and chatbot stuff conflicting with assigning resources to AGI R&D, etc.
Of course no one knows for sure except the people directly involved here.
Why is the board reversing course? They said they lost confidence in Altman; that's true whether lots of people quit or not. So it was bullshit.
Why did the board not foresee people quitting en masse? I'm sure some of it is loyalty to Sam and Greg, but it's also revulsion at how suddenly they were fired.
Why did the interim CEO not warn Ilya about the above? Sure it’s a promotion but her position is now jeopardized too. Methinks she’s not ready for the big leagues
Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.
We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.
Insisting, no matter how painful, that the organization stays true to the charter could be considered a desirable trait for the board of a non-profit.
Instead of "Sam has been lying to us" it could have been "Sam had diverged too far from the original goal, when he did X."
They could have meant that Sam had 'not been candid' about his alignment with commercial interests vs. the charter.
The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for profit entity, right?
The IRS will know soon enough if they were indeed non-profit.
Basically the board's choices are commit seppuku and maybe be viable somewhere else down the line, or try to play hardball and fuck your life forever.
It's not really that hard a choice, but given the people who have to make it, I guess it kinda is...
Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.
If the whole process had been more regular, they could have removed Altman with little drama.
If he's reinstated, then that's it, AI will be used to screw us plebs for sure (fastest path to evil domination).
If he's not reinstated, then it would appear the board acted in the nick of time. For now.
For a nonprofit board, the closest thing is something like "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.
EDIT: Correction:
Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote out and replace board members, and so could sack the board.
OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.
This is ML, not software engineering. Money wins, not engineering. Same as with Google, which won because they invested massively in edge nodes, winning the ping race (fastest results), not the best results.
Ilya can follow Google's Bard by holding it back until they have countermodels trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep them the existing GPU hours. It's only mass, not smarts. Ilya lost this one.
https://www.theguardian.com/technology/2023/nov/18/earthquak...
OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.
Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.
What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses otherwise, Google won because PageRank produced so much better results it wasn't even close.
That's a funny use of the word truce.
"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."
It's hard to put into words, that do not seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model is passing that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly go to whatever is better at the current price metrics in a heartbeat.
If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg: you can stay with the company, but you have to step down from the board.
Now Google search has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.
Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly since there is no loyalty.
On a side note: Microsoft is playing the game very smart by adding AI to their products, which makes you stick with them.
https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...
Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.
If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.
The tech industry (and US society generally) is quite hypocritical on this point.
* Greg wasn’t fired, just removed from the board, after which he chose to resign.
All that tough talk means doodly-squat.
OpenAI does not have an associative body, to my knowledge.
This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.
Now Google produces garbage, but not in 2004.
I’d guess this sort of narcissist behavior is what got him canned to begin with. Good riddance.
Google won against (initially) AltaVista because they had so much money to buy themselves into each country's Interxion to produce faster results, with servers and cheap disks.
The PageRank-and-more-bots approach kept them in front afterwards, until a few years ago when search went downhill due to SEO hacks in this monoculture.
Clicking an ad is not the only way it is monetized.
What looks quite unprofessional (at least on the outside) here is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without talking to him about change first. That's not how things are done in a professional governance structure.
Then there is a lot of fallout that any half competent board member or C-level manager should have seen coming. (Who is this CTO that accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)
All of it reads more like a high school friends club than a multi billion dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.
In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.
If they had said "it is clear Sam and the board will never see eye to eye on alignment, etc.," they probably could have made it 50/50 or even come out favored.
My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.
Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.
I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).
If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”
Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.
Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.
Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.
Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)
I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.
If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.
As an anecdote: before Google, I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked me about some author, let's say William Shakespeare. Guess what the other search engine found for me and my grandma: porn...
Certainly not when they won.
They were better. Basic PageRank was better than anything else. And once they figured out advertising, they kept making it better to seal their dominance.
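For what it's worth, the "PageRank" being argued about above is just a power iteration over the link graph. A toy sketch, with a hypothetical three-page graph that has nothing to do with Google's production system:

```python
# Minimal PageRank power-iteration sketch (illustrative only; the tiny
# link graph and iteration count here are made-up assumptions).
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iters):
        # every page keeps a (1 - damping) baseline share
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # a page splits its damped rank evenly among its outlinks
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked to by both "a" and "b", so it ends up ranked highest
```

The insight the comments are circling is that this score depends on the global link structure, not on keyword matching, which is why it was hard for keyword-era engines to copy quickly.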