- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products
- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs
- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well
---
Sam Altman's Leadership
- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight
- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned
---
Board Issues
- The board acted incompetently and destructively without clear reasons or communication
- The new board seems more reasonable but may struggle to govern given Sam's power
- There are still opposing factions on ideology and commercialization that will continue battling
---
Employee Motivations
- Employees followed the money trail and Sam to preserve their equity and careers
- Peer pressure and groupthink likely also swayed employees more than principles
- Mission-driven employees may still leave for opportunities at places like Anthropic
---
Safety vs Commercialization
- The safety faction lost this battle but still has influential leaders wanting to constrain the technology
- Rapid commercialization beat out calls for restraint but may hit snags with model issues
---
Microsoft Partnership
- Microsoft strengthened its power despite not appearing involved in the drama
- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity
Wouldn't you, when the AI safety wokes decide to torch the rewards of your hard work of grinding for years? I feel there was less groupthink here; everyone saw the board for what it is and its inability to lead, or even act rationally. OpenAI did not just become a sinking ship; it was unnecessarily sunk by people with no skin in the game, while your personal wealth and success were tied to the ship.
I don't personally like him, but I must admit he displayed a lot more leadership skill than I'd recognized before.
It's inherently hard to replace someone like that in any organization.
Take Apple after losing Jobs. It's not that Apple was a "weak" organization; it was Jobs who was extraordinary and indeed irreplaceable.
No, I'm not comparing Jobs and Sam. Just illustrating my point.
What makes this "likely"?
Or is this just pure conjecture?
Which might have oversight from AMZN instead of MSFT?
Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement
It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.
Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with Sam. Do we really think things like that were not intended to change people's minds?
Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO was counterproductive to their organization's goals?
Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-solicit clauses from employment contracts, to facilitate freedom of employment movement and increase industry wages in the process?
Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.
The comment you responded to made neither of those claims, just that they were "involved".
Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.
Whether or not that's "better" or "stronger" is up to individual interpretation.
https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
Example: Put a loser as CEO of a rocket ship, and there is a huge chance that the company will still be successful.
Put a loser as CEO of a sinking ship, and there is a huge chance that the company will fail.
The exceptional CEOs are those who turn failures into successes.
The fact this drama has emerged is the symptom of a failure.
In a company with a great CEO this shouldn’t be happening.
"A cult follower does not make an exceptional leader" is the one you are looking for.
Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]
> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.
[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This is not the frequency by length of unemployment, but:
https://www.statista.com/statistics/1219257/us-employment-ra...
All these opinions of outsiders don't matter. It's obvious that most people don't know Sam personally or professionally and are going off of the combination of: 1. PR pieces being pushed by unknown entities 2. positive endorsements from well-known people who likely know him
Both those sources are suspect. We don’t know the motivation behind their endorsements and for the PR pieces we know the author but we don’t know commissioner.
Would we feel as positive about Altman if it turned out that half the people and PR pieces endorsing him did so because government officials were pushing for him? Or if the celebrities in tech are endorsing him because they are financially incentivized?
The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).
First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.
But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.
I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.
Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and that there was potential jail time involved. And they justified smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???
There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.
Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."
They're very orthogonal things.
Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people, but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it were a for-profit company you could argue that their voice matters. For a non-profit, however, a person’s opinion should only matter insofar as it is aligned with the non-profit's mission.
The investment is refundable and has high priority: Microsoft has priority to receive 75% of the profit generated until the 10B USD has been paid back.
Plus (checks notes), in addition (!), OpenAI has to spend the money back on Microsoft Cloud Services (where Microsoft takes a cut as well).
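The payback terms described above can be illustrated with some rough arithmetic. This is a hypothetical sketch: the 75% profit share and $10B figure come from the comment, while the 20% cloud margin is purely an assumed number for illustration, not a reported figure.

```python
# Rough illustration of the repayment terms described above.
# The 20% cloud margin is an assumption for illustration only.
investment = 10e9          # Microsoft's reported investment, USD
profit_share = 0.75        # Microsoft's share of OpenAI profit until repaid

# Profit OpenAI must generate before Microsoft's $10B is fully repaid:
profit_to_repay = investment / profit_share   # roughly $13.3B

# If OpenAI spends the investment on Azure, Microsoft recoups part of it
# again through its cloud margin (assumed 20% here):
cloud_margin = 0.20
recouped_via_cloud = investment * cloud_margin

print(f"Profit needed to repay: ${profit_to_repay / 1e9:.1f}B")
print(f"Recouped via cloud spend (assumed 20% margin): ${recouped_via_cloud / 1e9:.1f}B")
```

Under these assumed numbers, the structure means Microsoft's downside is smaller than the headline $10B suggests: some of the investment cycles straight back through Azure revenue, and the profit share has first claim on earnings.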
Having no leadership at all guarantees failure.
Still a good deal, but your accounting is off.
I've worked with a contractor who went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. The guy is working now, but not in the same shape.
I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.
I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]
> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed
[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...
Funnily enough, a bit like how there's a middle ground between "Microsoft should not be allowed to create browsers or have license agreements" and "Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet".
It's not freedom of employment when, funnily enough, those jobs aren't actually available to AI researchers not working for an organisation Microsoft is trying to control.
Toner got her board seat because she was basically Holden Karnofsky's designated replacement:
> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.
> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".
There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)
1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...
To that end, observing unanimous behavior may imply some bias.
Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.
I agree with your stance that a majority of the workforce disagreed with the way things were handled, but that proportion is likely a subset of the proportion who signed their names on the document, for the reasons stated above.
There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.
The logic being that if any opinion has above X% support, people are choosing it based on peer pressure.
So clearly this wasn't a 50/50 coin flip.
The question at hand is whether the skew against the board was sincere or insincere.
Personally, I assume that people are acting in good faith, unless I have evidence to the contrary.
I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.
(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)
He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.
https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...
I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful) so I imagine we could solve it similarly for AIs.
In the end, if it's quality content, learning it is beneficial - no matter who produced it. Garbage needs to be eliminated and the distinction is made either by human trainers or already trained AIs. I have no idea how to train the latter but I am no expert in this field - just like (I suspect) the author of that blog.
But future signees are influenced by previous signees.
Acting in good faith is different from bias.
"Failure" in this context essentially means arriving at a materially suboptimal outcome. Leaders in this situation can easily be considered "irreplaceable", particularly in the early stages, as their decisions are incredibly impactful.
> it seems that Helen was picked by Holden to take his seat.
So you can only speculate as to how she got the seat. Which is exactly my point. We can only speculate. And it's a question worth asking, because governance of America's most important AI company is a very important topic right now.