And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.
To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.
IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion-dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.
The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.
If they wanted to show they’re committed to backtracking they could resign themselves.
Now it sounds more like they want to have their cake and eat it.
Lmfao you're joking if you think they "realized their mistake" and are now atoning.
This is 99% from Microsoft & OpenAI's other investors.
That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.
Option B: try to fix mistakes as quickly as possible
This is that thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that, actually or apparently, align personal interest with hiding mistakes are also bad).
Exactly. You can bet there have been some very pointed exchanges about this.
At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.
I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.
But everyone important does so who cares about the rest?
Oh wait, that's what OpenAI is.
(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)
It’s really dismissive toward the rank and file to think that they don’t matter at all.
b) Altman personally hired many of the rank and file.
c) OpenAI doesn't exist without customers, investors, or partners. And in this one move the board has alienated all three.
In this case this person seems to have primarily tried and failed to spin a robotics company out of Singularity “university” in 2012.
This only sounds adjacent to AI if you work in Hollywood.
Investors care, but if new management can keep the gravy train rolling, they ultimately won't care either.
Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?
I also feel that they can patch relationships. Satya may be upset now, but will he still be upset on Monday?
It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now.
The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was set up.
Yes, having outsiders on the board is essential. But come on, we need folks with recognized industry experience in this field: leaders with deep backgrounds, recognized for their contributions. Hinton, Ng, Karpathy, etc.
In the case of AI ethics, the people who are deeply invested in this are also some of the pioneers of the field who made it their life's work. This isn't a government agency. If the mission statement of developing AGI as a non-profit, as soon as possible and as safely as possible, were to be adhered to, and the company's current direction is going wildly off course, then having a competent board would have been key.
I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.
EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.
This. Some people even take it to the extreme and choose not to apologize for anything to look tough and smart.
I've honestly never had more hope for this industry than when it appeared that Altman was pushed out by engineering for forgoing the mission to create world-changing products in favor of the usual mindless cash grab.
The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.
That's why I'm not too surprised to see that it probably won't really play out that way, and will likely end up with OpenAI turning even faster into yet another tech company concerned exclusively with next quarter's revenue.
Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.
And I don't know how you can "keep the gravy train rolling" when you want the company to move away from commercialisation.
What shocked me most was that Quora IMHO _sucks_ for what it is.
I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful, and it's threatened by the existence of CoPilot, et al.
If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.
Microsoft could run the entire business at a loss just to attract developers to Azure.
"Disagree and commit."
- says every CEO these days
Everyone just assumes that without Sam they're worse off.
But what if, my gosh, they aren’t? What if innovation accelerates?
My point is that it's useless to assume a new business from Altman competing with OpenAI will inherently be successful. There's more to it than that.
I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.
It's often a sign of incompetence though. Or rather a confirmation of it.
Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.
> but if new management can keep the gravy train rolling
I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.
But it's not just him is it?
I am always curious how these conversations go in corporate America. I've seen them in the street and with blue collar jobs.
Loads of feelings get hurt and people generally don't heal or forgive.
I think a wait-and-see approach is better. If I were to speculate, I think we had some internal politics spill into public because Altman needs the public pressure to get his job back.
I had the exact opposite take. If I were rank and file I'd be totally pissed how this all went down, and the fact that there are really only 2 possible outcomes:
1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.
2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.
It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely that innovation would accelerate. Which seems doubly bad if they've effectively spawned a competitor made up of all the people who wanted to move faster.
The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much continued fuck-uppery they want to continue with.
Like others have said, regardless of what is the "right" direction for OpenAI, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned with commercializing and productizing AI, while Sutskever was worried about developing AI responsibly, with more safeguards), all they've done is fucked over OpenAI.
I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.
Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.