https://twitter.com/karaswisher/status/1725682088639119857
nothing to do with dishonesty. That’s just the official reason.
———-
I haven’t heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
https://twitter.com/ilyasut/status/1707752576077176907
The press release about candid talk with the board… it’s probably just cover for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.
Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement, and Sutskever reached his tipping point and decided to use said leverage.
I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
A mere direction disagreement would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work." And it surely would have been decided months in advance of being announced.
Unless Brockman was involved, though, firing Brockman doesn't really make sense.
There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.
OpenAI: we need clarity on your new direction.
When asked about a former employee, a company will typically provide only three things:
1) a confirmation of the dates of employment
2) a confirmation of the role/title during employment
3) whether or not they would rehire that person
... and that's it. The last one is a legally sound way of saying that their time at the company left something to be desired, up to and including the point of them being terminated. It doesn't expose the company to defamation claims because it's completely true: the company is fully in charge of that decision and can thus set the reality surrounding it.
That's for a regular employee who is having their information confirmed by some hiring manager in a phone or email conversation. This is a press release for a company connected to several very high-profile corporations in a very well-connected business community. Arguably it's the biggest tech exec news of the year. If there's an ulterior or additional motive as you suggest, there's a possibility Sam goes and hires the biggest son-of-a-bitch attorney in California to convince a jury that the ulterior or additional motive was _the only_ motive, and that calling Sam a liar in a press release was defamation. As a result, OpenAI/the foundation would probably be paying him _at least_ several million dollars (probably a lot more) for making him hard to hire at other companies.
Either he simply lied to the board and that's it, or OpenAI's counsel didn't do their job of putting their foot down over the language used in the press release.
It's not like you can just move to another AI company if you don't like their terms.
Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.
We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.
Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.
Edit: Ok seems to be a joke account. I guess I’m getting old.
First thing tomorrow I'm kicking off another round of searching for alternatives.
Even with very public cases of company leaders who did horrible things (much worse than lying), the companies that fired them said nothing officially. The person just "resigned". There's just no reason to open up even the faintest possibility of an expensive lawsuit, even if they believe they can win.
So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.
I wouldn't put money on the last one, though.
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush it through and put a company worth tens of billions of dollars at risk.
You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.
That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.
That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.
Ha! Tell me you don't know about markets without telling me! Stock can drop after hours too.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
[source: https://twitter.com/karaswisher/status/1725702501435941294]
Sounds like you exactly predicted it.
Of course the press release is under scrutiny, we are all wondering What Really Happened. But careless statements create significant legal (and thus financial) risk for a big corporate entity, and board members have fiduciary responsibilities, which is why 99.99% of corporate communications are bland in tone, whatever human drama may be taking place in conference rooms.
I don't think it's correct, not because it sounds like a sci-fi novel, but because I think it's implausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.
I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.
You also wouldn't try to avoid a lawsuit if you believed (hypothetically) that a lawsuit was impossible to avoid.
Dang! He left @elonmusk on read. Now that's some ego at play.
Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.
>I'm not patronizing you
(A)ssuming (G)ood (F)aith, referring to someone online by their name, even in the edge case where their username is their name, is considered patronizing, as it is difficult to convey tone via a text medium in a way that isn't perceived as mockery or a veiled threat.
This may be a US-internet thing; analogous to how getting within striking distance of someone with a raised voice can be taken as a capital offense in the US, while being completely normal in some parts of the Middle East.
How is the language “we are going our separate ways” compared with “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI” going to have a material difference in the outcome of the action of him getting fired?
How do the complainants show a judge and jury that they were materially harmed by the choice of language above?
And this time around he would have the sympathy of the crowd.
Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn’t have done it all by himself.
The war between OpenAI and Sam AI is just the beginning
Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).
About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.
OpenAI's board's press release could very easily be construed as "Sam Altman is not trustworthy as a CEO", which could lead to his reputation being sullied among other possible employers. He could argue that the board defamed his reputation and kept him from what was otherwise a very promising career in an unfathomably lucrative field.
This will end up being a blip that corrects once it’s actually digested.
Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.
> with enough time and copies of itself.
Alright, but that’s not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.
This has to be a joke, right?
Really they should have just said something to the effect of, "The board has voted to end Sam Altman's tenure as CEO at OpenAI. We wish him the best in his future endeavors."
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible for OpenAI to have internally right now, for two reasons. One, it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.
Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.
You're assuming they even consulted the lawyers...
In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.
Unless OpenAI can prove in a court of law that what they said was true, they're on the hook for that amount in compensation, perhaps plus punitive damages and legal costs.
I recognize that the above para sort of sounds like I think I have some authority to mediate between them, which is not true and not what I think. I'm just replying to this side conversation about how to be polite in public, just giving my take.
The broad pattern here is that there are norms around how and when you use someone's name when addressing them, and when you deviate from those norms, it signals that something is weird, and then the reader has to guess what is the second most likely meaning of the rest of the sentence, because the weird name use means that the most likely meaning is not appropriate.
All this to say that the board is probably unlike the boards of the vast majority of tech companies.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would for sure be the biggest problem OpenAI has.
When you're a public person, the bar for winning a defamation case is very high.
The commenter above doesn't mean that any reference to someone else by name ("Sam Altman was fired") is patronizing.
1) The comments are meant to be read by all, not just the author. If you want to email the author directly and start the message with a greeting containing their name ("hi jrockway!"), or even just their name, that's pretty normal.
2) You don't actually know the person's first name. In this case, it's pretty obvious, since the user in question goes by what looks like <firstname><lastname>. But who knows if that's actually their name. Plenty of people name their accounts after fictional people. It would be weird to everyone if your HN comment to darthvader was "Darth, I don't think you understand how corporate law departments work." Darth is not reading the comment. (OK, actually I would find that hilarious to read.)
3) Starting a sentence with someone's name and a long pause (which the written comma heavily implies) sounds like a parent scolding a child. You rarely see this form outside of a lecture, and the original comment in question is a lecture. You add the person's name to the beginning of the comment to be extra patronizing. I know that's what was going on and the person who was being replied to knows that's what was going on. The person who used that language denies that they were trying to be patronizing, but frankly, I don't believe it. Maybe they didn't mean to consciously do it, but they typed the extra word at the beginning of the sentence for some reason. What was that reason? If to soften the lecture, why not soften it even more by simply not clicking reply? It just doesn't add up.
4) It's Simply Not Done. Open any random HN discussion, and 99.99% of the time, nobody is starting replies with someone's name and a comma. It's not just HN; the same convention applies on Reddit. When you use style that deviates from the norm, you're sending a message, and it's going to have a jarring effect on the reader. Doubly jarring if you're the person they're naming.
TL;DR: Don't start your replies with the name of the person you're replying to. If you're talking with someone in person, sure, throw their name in there. That's totally normal. In writing? Less normal.
I don’t like this whole development one bit, actually. He lost his brakes and I’m sure he doesn’t see it this way at all.
Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.
lol
> "that's just crazy".
Why is it crazy? The purpose of OpenAI is not to make investors rich; having investors on the board trying to make money for themselves would be crazy.
Also, as long as you are a public person, defamation has a very high bar in the USA. It is not enough for the statement to be false; you have to actually prove that the person you're accusing of defamation knew it was false, or made the statement with reckless disregard for whether it was false.
Note that this is different from an accusation of perjury. They did not accuse Sam Altman of performing illegal acts. If they had, things would have been very different. As it stands, they simply said that he hasn't been truthful to them, which it would be very hard to prove is false.
Though I would go further than that: if that is indeed the reason, the board has proven themselves very much incompetent. It would be quite reckless to invite this kind of shadow of scandal over something that was a fundamentally reasonable disagreement.
Surely, at some level, you can be sued for making unfounded remarks. But then IANAL so, meh.
Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.
So better be the first to set the narrative.
No, in the UK it's unambiguously the other way round. The complainant simply has to persuade the court that the statement seriously harmed, or is likely to seriously harm, their reputation. Truth is a defence, but for that defence to prevail the burden of proof is on the defendant to prove that it was true (or to mount an "honest opinion" defence, on the basis both that the statement would reasonably be understood as one of opinion rather than fact and that they did honestly hold that opinion).
If they had the small majority needed to get rid of him over mere differences of future vision, they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable.
Woz did two magic things in the Apple II alone which no one else was close to: the hack for NTSC color, and the disk drive not needing a completely separate CPU. In the late '70s, that ability is what enabled the Apple II to succeed.
The point is Woz is a hacker. Once you build a system more properly, with pieces used how their designers explicitly intended, you end up with the Mac (and things like Sun SPARCstations), which doesn't have space for Woz to use his lateral-thinking talents.
We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.
That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.
It's foolish for any of us to peer inside the crystal ball of "what would Jobs be without Woz", but I think it is important to acknowledge that the Apple II and IIc pretty much bankrolled Apple through their pre-Macintosh era. Without those first few gigs (which Woz is almost single-handedly responsible for), Apple Computer wouldn't have existed as early (or as successfully) as it did. Maybe we still would have gotten an iPhone later down the line, but that's frankly too speculative for any of us to call.
Yes, generalizing is how we reason, because it lets us strip away information that is not relevant in most scenarios and reduces complexity and depth without losing much in most cases. My point is, this is not a scenario that fits in the set of “most cases.” This is actually probably one of the most unique and corner-casey examples of board dynamics in tech. Adherence to generalizations without considering applicability and corner cases doesn’t make sense.
It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.
An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are and that isn't "they can't think fast enough".