I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with pure financial incentives. They'll jump to Sam's new company cause of course that's where they'd make real money.
This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.
And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.
There's an idealistic bunch of people who think this was the best thing to happen to OpenAI; time will tell, but I personally think this is the end of the company (and of Ilya).
Satya must be quite pissed off, and rightly so; he gave them big money, believed in them, and got backstabbed as well. Disregarding @sama, MS is their single largest investor and it didn't even warrant a courtesy phone call to let them know of this whole fiasco (even though some savants were saying they shouldn't have to, because they "only" owned 49% of the LLC. LMAO).
Next bit of news will be Microsoft pulling out of the deal but, unlike this board, Satya is not a manchild going through a crisis, so it will happen without it being a scandal. MS should probably just grow their own AI in-house at this point, they have all the resources in the world to do so. People who think that MS (a ~50-year-old company, with 200k employees, valued at almost 3 trillion) is now lost without OpenAI and the Ilya gang must have room temperature IQs.
I would imagine that if you based hiring and firing decisions on the metric of 'how often this employee tweets' you could quite effectively cut deadwood.
With that in mind...
Yes, agreed, but on _twitter_?
The massive_disgruntled_engineer_rant does have a lot of precedent but I've never considered twitter to be their domain. Mailing lists, maybe.
If you're an employee at OpenAI there is a huge opportunity to leave and get in early with decent equity at potentially the next giant tech company.
Pretty sure everyone at OpenAI's HQ in San Francisco remembers how many overnight millionaires Facebook's IPO created.
Follow-up: Why is only some fraction on Twitter?
This is almost certainly a confounder, as is often the case when discussing reactions on Twitter vs reactions in the population.
What other places are there to engage with the developer community?
Literally the literal definition of 'selection bias' dude, like, the pure unadulterated definition of it.
Come on. “By 5 pm everyone will quit if you don’t do x”. Response: tens of heart emojis.
But also, if you're a cutting-edge researcher, do you want to stay at a company that just ousted the CEO because they thought the technology was moving too fast (it sounds like this might be the reason)? You don't want to be shackled by an organization turning into a new MIRI.
If the CEO of my company got shitcanned and then he/she and the board were feuding?
... I'd talk to my colleagues and friends privately, and not go anywhere near the dumpster fire publicly. If I felt strongly, hell, turn in my resignation. But 100% "no comment" in public.
Which likely most of the company was working on.
But people in the AI/learning community are very active on twitter. I don't know every AI researcher on OpenAI's payroll, but the fact is that most active researchers (judging by the list of OpenAI paper authors and, tbh, the people I know, as a researcher in this space) are on twitter.
Of course, OpenAI as a cloud platform is DOA if Sam leaves, and that's a catastrophic business hit to take. It is a very bold decision. Whether it was a stupid one, time will tell.
It was a question of whether they'd leave OpenAI and join a new company that Sam starts with billions in funding at comparable or higher comp. In that case, of course who the employees are siding with matters.
On twitter != 'active on twitter'
There's a biiiiiig difference between being 'on twitter' and what I shall refer to kindly as terminally online behaviour aka 'very active on twitter.'
It's created huge noise and hype and controversy, and shaken things up to make people "think" they can be in on the next AI hype train "if only" they join whatever Sam Altman does now. Riding the next wave kind of thing because you have FOMO and didn't get in on the first wave.
It’s a signal. The only meaning is the circumstances under which the signal is given: Sam made an ask. These were answers.
I also wasn't being facetious. If there are other places to share work and ideas with developers online, I'd love to hear about them!
But they will.
There’s nothing wrong with not following, it’s a brave and radical thing to do. A heart emoji tweet doesn’t mean much by itself.
Work is work. If you start being emotional about it, it's a bad, not good, thing.
Give me a break. The Apple Watch and AirPods are far and away the leaders in their categories, Apple's silicon is a huge leap forward, there is innovation in displays, CarPlay is the standard auto interface for millions of people, and while I may question its utility, the Vision Pro is a technological marvel. The iPhone is still a juggernaut (and the only one of these examples that predates Jobs' passing), etc. etc.
Other companies dream about "coasting" as successfully.
As soon as one person becomes more important than the team, as in the team starts to be structured around said person instead of with them, that person should be replaced. Otherwise the team will not function properly without the "star player", and the team is no longer more than the sum of its members.
Just as another perspective.
You can disagree. You can say only explicit non-emoji messages matter. That’s ok. We can agree to disagree.
By what metric? I prefer open hardware and modifiable software - these products are in no way leaders for me. Not to mention all the bluetooth issues my family and friends have had when trying to use them.
You just need to temper that before you start swearing oaths of fealty on twitter, because that's giving real Jim Jones vibes, which isn't a good thing.
The example of Steve Jobs used in the above post is probably a prime example - Apple just wouldn’t be the company it is today without that period of his singular vision and drive.
Of course they struggled after losing him, but the current version of Apple that has lived with Jobs and lost him is probably better than the hypothetical version of Apple where he never returned.
Great teams are important, but great teams plus great leadership is better.
It doesn't matter if it's large, unless the "very active on twitter" group is large enough to be the majority.
The point is that there may be (arguably very likely is) a trait that AI researchers active on Twitter have in common which differentiates them from the broader population, thereby introducing bias.
It could be that the 30% (made up) of OpenAI researchers who are active on Twitter are startup/business/financially oriented and therefore align with Sam Altman. This doesn't say as much about the other 70% as you think.
I rarely see a professor or PhD student voicing a political viewpoint (which is what the Sam Altman vs Ilya Sutskever debate is) on their Twitter.
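The 30%/70% argument above is just selection bias in miniature. A quick Python sketch (all numbers invented, mirroring the made-up 30% in the comment, and the assumed 90%/50% support split is likewise hypothetical) shows how a vocal subset can report a very different support rate than the full population:

```python
# Toy simulation of the selection-bias argument above.
# All parameters are made up for illustration only.
import random

random.seed(0)

N = 1000  # hypothetical researcher population
population = []
for _ in range(N):
    on_twitter = random.random() < 0.3  # assume 30% are vocal on Twitter
    # Assume vocal researchers skew pro-Sam (90%); quiet ones are 50/50.
    supports_sam = random.random() < (0.9 if on_twitter else 0.5)
    population.append((on_twitter, supports_sam))

vocal = [s for t, s in population if t]      # only the Twitter-active subset
everyone = [s for _, s in population]        # the whole population

print(f"support among vocal subset: {sum(vocal) / len(vocal):.0%}")
print(f"support in full population: {sum(everyone) / len(everyone):.0%}")
```

Under these (entirely assumed) numbers, the vocal subset shows roughly 90% support while the population as a whole sits much lower, which is exactly why counting heart emojis tells you little about the silent majority.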
That's kind of how it got achieved: every other company didn't really see the benefit of going straight for AGI, instead working on incremental additions and small iterations.
I don't know why the board decided to do what it did, but maybe it sees that OpenAI was moving away from R&D and too much into operations and selling a product.
So my point is that OpenAI started as a charity and was literally set up in a way to protect that model, by having the for-profit arm be governed by the non-profit wing.
The funny thing is, Sam Altman himself was part of the people who wanted it that way, along with Elon Musk, Ilya and others.
And I kind of agree, what kind of future is there here? OpenAI becomes another billion-dollar startup that, what, eventually sells out with a big exit?
It's possible to see the whole venture as taking away from the goal set out by the non-profit.
A lot of researchers like to work on cutting edge stuff, that actually ends up in a product. Part of the reason why so many researchers moved from Google to OpenAI was to be able to work on products that get into production.
> Particularly with a glorified sales man

> Sounds like they aren't spending enough time actually working.

Lmao, I love how people just resort to personal attacks.
Seems like a bit of a commercial risk there if the CEO can 'make' a third of the company down tools.
I have no idea what the actual proportion is, nor how investors feel about this right now.
The true proportion of researchers who actively voice their political positions on twitter is probably much smaller than it appears, and it's almost certainly a biased sample.