Edit: This was in reply to the prior, non-paywalled URL. Comment moved by dang.
Surely Emmett must be really capable, but he still seems a bit of a wild card entry.
Also if some of the wilder motives behind this shitshow are true then Microsoft will never see GPT-5.
Just like Anthropic spinning off from OpenAI brought some very cool research (particularly liked the functional introspection vs node based stuff), Altman going into his own venture to compete will help diversify the overall space.
And while many seem to be thinking this is going to bode poorly for OpenAI, I think perhaps long term it might be a good thing. Ilya is quite impressive and his having greater influence with Altman gone may result in them exploring more interesting avenues that have long term value which might not have been as prioritized with a focus on maximizing short term rollouts.
While the way the board handled it wasn't great optics, this is probably going to be an overall positive shakeup for AI and LLMs in 2024-2025.
I expect them to continue to be relevant, but just one of the chorus, no longer the leader.
Then when it becomes clear that negotiations won't solve the problem the board drops the hammer and lets the other side make their move. If they act professionally and like adults, well, maybe there's room for negotiation after all.
If, on the other hand, they do weird childish shit, well... guess not. In with the new guy!
I'm sure Quora views took a hit after ChatGPT. Not like Quora was any good before ChatGPT, they just managed to get to the top of Google results for a lot of common questions.
Now, Poe by Quora was trying to go big on custom agents. The GPT Agents announcement on DevDay was a fundamental threat to Poe in many ways.
I'm convinced that Adam D'Angelo probably had some influence on the other two board members too. He should've left the board of OpenAI the moment OpenAI and his own company were competing in the same space.
By cutting its other revenue streams OpenAI has lost all of its leverage.
So now it will shrink to being effectively an external dev team for Bing / Windows.
If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?
Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.
Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman has big ambitions to be fully self-reliant and own the stack completely.
No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
This industry is changing far too fast for the legal calendar.
By the time the lawsuits are resolved, it will all be irrelevant.
But at this point, we have Ilya, who clearly has strong differences of opinion with Sam; Adam D'Angelo, whose company Poe has a conflict of interest with the existence of the for-profit OpenAI and their GPT store/customizable agents; and two other board members.
And the fact that Emmett is only interim should give you a hint something is up.
[1] Sorry - the full list is behind a paywall https://www.crunchbase.com/organization/openai/company_finan...
Or are they just going to focus on the hardware thing?
I would have been extremely uneasy if Altman had the power to get back as CEO after this kind of ousting but I didn't really support ousting him like that in the first place, especially if this didn't reach a fever pitch over some much more capable model.
Kyle's story was brewing from the moment GM appointed their attorney to manage Cruise - everyone knew there was gonna be restructuring of the executive team after the incident.
If anything, it was a convenient time for Kyle to step down as it wouldn't get a lot of prime time thanks to OpenAI drama.
I think one thing is for certain: OpenAI won't be worth that much anymore.
So questioning whether they will survive seems very silly and incredibly premature to me
Apple has rarely been first to market with technology. However, they are often first to have a cohesive user experience that integrates it.
Maybe Adam will not have any conflict in a few months, but he does now. He did when he and others got rid of Greg and Sam from the board. That's my point.
Is it though? "No outcome where [OpenAI] is one of the big five technology companies. My hope is that we can do a lot more good for the world than just become another corporation that gets that big." -Adam D'Angelo
It really is a massive change in direction and a lost opportunity to set their own course.
As best I can say regarding such a broad, poorly defined topic: after seeing real consequences at Twitter, FTX, and others (Theranos, etc.), that brazen, distasteful, dangerous megalomania seems to have re-acquired the bad reputation that common sense has eternally given it, in every place and time.
Obviously, it's a ridiculous, self-destructive, dangerous idea. Whatever the cause, I'm glad people seem to be regaining their senses - but will they do it quickly and completely enough, will they be manipulated into blaming someone else (a massive risk in the post-truth world that accepts dis/misinformation), and will the megalomaniacs retain their attitudes and power, and then use it to suppress others.
https://news.microsoft.com/source/features/ai/openai-azure-s...
https://www.semafor.com/article/11/18/2023/openai-has-receiv...
> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.
> That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.
> One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.
> Microsoft has certain rights to OpenAI’s intellectual property so if their relationship were to break down, Microsoft would still be able to run OpenAI’s current models on its servers.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.
Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.
Surely that's what you need for safety?
Follow their dear saviour into a new venture or stay on a ship with their shares that will begin to sink at markets open.
Microsoft may be holding a very huge bag. Probably will sue.
OP might have a point: if OpenAI declines, devs might prefer the Azure API over the OpenAI API for factors like stability, quality, response time, better integration with existing Azure stack, etc.
What a ridiculous end to this chapter of the story.
But I'm sure someone here knows the legal structure better than I do, I just quickly skimmed over
I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with pure financial incentives. They'll jump to Sam's new company cause of course that's where they'd make real money.
This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.
And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.
The board will bring in an adult CEO who can balance the nonprofit charter with Microsoft and the commercial business, and who doesn't have a million side projects taking his or her focus away. Some employees will leave but the vast majority will stay for the usual reasons (i.e. inertia), the business will keep growing because ChatGPT is already a worldwide brand at this point and the vast majority of users don't give a hoot about any of this palace intrigue as long as the product works.
And the board will ultimately be vindicated for acting as fiduciaries for the nonprofit's mission and bylaws -- and not for the financial interests of Satya Nadella, Vinod Khosla, and the like.
If you are CEO for a day do you get to wear a paper crown like at Burger King?
Thanks Sam, we bluffed but no way we're quitting
Did anyone believe that he seriously believed in that? I thought the consensus was he was angling for a government backed monopoly with OpenAI as the “steward” of large AI models.
There's an idealistic bunch of people that think this was the best thing to happen to OpenAI, time will tell but I personally think this is the end of the company (and Ilya).
Satya must be quite pissed off and rightly so, he gave them big money, believed in them and got backstabbed as well; disregarding @sama, MS is their single largest investor and it didn't even warrant a courtesy phone call to let them know of all this fiasco (even though some savants were saying they shouldn't have to, because they "only" owned 49% of the LLC. LMAO).
Next bit of news will be Microsoft pulling out of the deal but, unlike this board, Satya is not a manchild going through a crisis, so it will happen without it being a scandal. MS should probably just grow their own AI in-house at this point, they have all the resources in the world to do so. People who think that MS (a ~50-year-old company, with 200k employees, valued at almost 3 trillion) is now lost without OpenAI and the Ilya gang must have room-temperature IQs.
I would imagine that if you based hiring and firing decisions on the metric of 'how often this employee tweets' you could quite effectively cut deadwood.
With that in mind...
The smartest white-hot startup on the planet has the smallest and most inexperienced board.
How did that even happen on Sam’s watch?
My take: he always thought Ilya would have his back, and with Greg the 3 could overrule anybody, so they kept it small
Bad idea
- Does not yet have a big model (need $$$ and months to train, if code is ready)
- Does not have proprietary code OpenAI has right now
- Does not have labeled data ($$$ and time) and ChatGPT logs
- Does not have the ChatGPT brand...
I haven't checked but I'm pretty sure OpenAI has many patents in the field and they won't be willing to share them with another company, especially with AkshuallyOpenAI.
But this is a disaster that can't be sugarcoated. Working in an AI company with a doomer as head is ridiculous. It will be like working in a tobacco company advocating for lung cancer awareness.
I don't think the new CEO can do anything to regain trust in a record-short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?
Plus giant competitors like Google, Facebook might step in to fill the void.
The PR hit will be bad for a few days. Good time to buy MS stock on discount but this won't matter in a year or two.
It remains to be seen whether investors will pour another set of billions of dollars just to catch up with OpenAI, which, by that time, would have evolved even further.
There is a ray of hope that, since in this field old things quickly become obsolete and new things are the cutting edge, Sam Altman can convince investors to invest in the cutting edge with him. Then investors have a choice on an almost level field, choosing between people, companies and personalities, for a given outcome.
You may well think that the structure of the company and how the votes on the board work etc. means that any motion you can carry at the board is within the sphere of things you can achieve, i.e. that you have autonomy to do anything you can secure the votes for, but that's rarely the case. In actual practice, people like the investors and other stakeholders can change the rules if they need to, exercise soft pressure, bring legal suits and do other things. Your autonomy to act is effectively pretty narrow.
However this finally plays out, whether Sam comes back or not, whether OpenAI's board changes, the people who orchestrated this have seriously damaged themselves and they will most likely have less of both authority and autonomy in the future.
Yes, agreed, but on _twitter_?
The massive_disgruntled_engineer_rant does have a lot of precedent but I've never considered twitter to be their domain. Mailing lists, maybe.
It's failing to generate Python code - first time I've seen this in months of heavy usage.
If you're an employee at OpenAI there is a huge opportunity to leave and get in early with decent equity at potentially the next giant tech company.
Pretty sure everyone at OpenAI's HQ in San Francisco remembers how many overnight millionaires Facebook's IPO created.
Whether you're Team Sutskever or Team Altman, you can't deny it's been interesting to see extremely talented people fundamentally disagree about what to do with godlike technology.
Damn. I hope Sam guts the place and ClosedAI ends up as a "Lostech" fossil and M$ ends up holding the most expensive sack of poo ever conceived.
[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...
And guess what OpenAI will not be focusing on? Profit.
They are not gonna quit tomorrow. They will leave over time due to the low compensation.
If you look at Poe as a value add for existing Quora users, instead of a feature that is going to grow their userbase, it's still a net win for Quora even if GPT agents exist simultaneously.
Like the board presumably called her and said, "hey Sam is out, you're the CEO for now, more details to come"
And then in the next 48 hours she had a chance to talk to Sam and others and realize that she was on his side.
And I'm sure they're getting their money's worth. E.g. the last time I heard or thought of Bing (outside of the ChatGPT context) was years ago. Now I see it all the time. That's worth $$$$$ to them.
Not rational iff (and unlike Sutskever, Hinton, Bengio) you are not a "doomer" / "decel". Ilya's very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya particularly predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.
(like LeCun, I am not a doomer; but I am also not Hinton to know any better)
The default consequence of AGI's arrival is doom. Aligning a super intelligence with our desires is a problem that no one has solved yet.
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
----
Listen to Dwarkesh Podcast with Eliezer or Carl Shulman to know more about this.
- Nuke employee morale: massive attrition, not getting upside (tender offer),
- Nuke the talent magnet: who's going to want to work there now?
- Nuke Microsoft relationship: all those GPUs gone,
- Nuke future fundraising: who's going to fund this shit show?
Just doesn't make sense. https://twitter.com/karpathy/status/1726478716166123851
Going nuclear?
they had an interim CEO: why exactly do they need a new interim CEO? it's been a couple of weekend days, zero business days. Not taking sides, none of this makes any sense, so much drama.
hypothesis: there's too much money on the table, and inasmuch as some people care more about the public welfare than money, too much of that too (meaning, a lot of money in something usually means it's something important, something of value, and therefore not all bad; so this is all just people trying to steer things the way they want them to go rather than the way other people want them to go)
And no way Thrive, which was just about to buy up employee shares at $80-$90b, is going to go through with the tender offer now
Follow-up: Why is only some fraction on Twitter?
This is almost certainly a confounder, as is often the case when discussing reactions on Twitter vs reactions in the population.
What other places are there to engage with the developer community?
Poe is not really meant as a value addition for Quora users. Poe was a general AI chat company, like ChatGPT.
Poe's unique selling point was their 'chat agents with customizable instructions/personality' and they were charging people money for this while pretty much building on OpenAI GPT API. They also had an agents store.
During DevDay, when Sam announced GPT Agents and the store, that was a fundamental threat to Poe's existence.
Also keep in mind that a year earlier in Spring 2017 Sam Altman led Quora's Series D, after YC previously joined in on Quora's Series C in 2014. So the two of them clearly had some pre-existing relationship.
I don't think OpenAI and Quora (the product) are a serious conflict of interest. You claim "I'm sure Quora views took a hit after ChatGPT" but I really doubt that's true in any meaningful way. Quora's struggles are a separate issue and predate the GPT craze of the last year.
Nor were Poe and OpenAI competitors until recently; Poe was simply building on top of OpenAI models, the same as hundreds of other ventures in the space right now.
However...I do agree that the GPTs announcement two weeks ago now creates a very clear conflict of interest--OpenAI is now competing directly against Poe. And because of that, I agree that Adam probably should leave the board.
The timing also raises the question of whether booting Sam is in any way related to the GPTs launch and to Poe. Perhaps Sam wasn't candid about the fact that they were about to be competing with Adam's company. The whole thing is messy and not a good look and exactly why you try to avoid these conflicts of interest to begin with.
Signed by Sam Altman, Ilya Sutskever, Yoshua Bengio, Geoff Hinton, Demis Hassabis (DeepMind CEO), Dario Amodei (Anthropic CEO), and Bill Gates.
The only thing the board has really done wrong is not communicating their reasons clearly.
No, that very much is the fear. They believe that by training AI on all of the things that it takes to make AI, at a certain level of sophistication, the AI can rapidly and continually improve itself until it becomes a superintelligence.
Literally the literal definition of 'selection bias' dude, like, the pure unadulterated definition of it.
Come on. “By 5 pm everyone will quit if you don’t do x”. Response: tens of heart emojis.
https://www.matthewgeleta.com/p/joscha-bach-ai-risk-and-the-...
I don't buy for a second that enough employees will walk to sink the company (though it could be very disruptive). But for OpenAI, losing a big chunk of their compute could mean they are unable to support their userbase, and that could permanently damage their market position.
But also, if you're a cutting-edge researcher, do you want to stay at a company that just ousted the CEO because they thought the speed of technology was going too fast (it's sounded like this might be the reason)? You don't want to be shackled by the organization becoming a new MIRI.
If the CEO of my company got shitcanned and then he/she and the board were feuding?
... I'd talk to my colleagues and friends privately, and not go anywhere near the dumpster fire publicly. If I felt strongly, hell, turn in my resignation. But 100% "no comment" in public.
Which likely most of the company was working on.
But people in the AI/learning community are very active on twitter. I don't know every AI researcher on OpenAI's payroll. But most active researchers (looking at the list of OpenAI paper authors, and tbh the people I know, as a researcher in this space) are on twitter.
Curated datasets will rise in value… that’s the main cost of replicating OpenAI models, in my understanding
Microsoft will fall in line and do what makes sense once the dust settles, and that probably means continuing to work with OpenAI for the foreseeable future. Most of the employees, even if they supported Sam, will probably also remain until a better option truly appears, and it remains to be seen whether Sam will really open up a competitor and try to hire everyone.
I'd jump ship too if I were them.
Of course, OpenAI as a cloud platform is DOA if Sam leaves, and that's a catastrophic business hit to take. It is a very bold decision. Whether it was a stupid one, time will tell.
By selecting Shear as the new interim CEO, the board signaled they weren't interested in Altman coming back.
It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.
It was a question of whether they'd leave OpenAI and join a new company that Sam starts with billions in funding at comparable or higher comp. In that case, of course who the employees are siding with matters.
What leverage does Microsoft have over OpenAI? Can Microsoft shut off access to their hardware to support Altman? Why would Microsoft want this?
https://the-decoder.com/openai-lures-googles-top-ai-research....
So less like an alien invasion.
And more like a pandemic at the speed of light.
Another example is the Be My Eyes data - presumably the vision part of GPT-4 was trained on the archive of data the blind-assistance app has, and that could be an exclusive deal with OpenAI.
What we need at this point is a neutral 3rd party who can examine their safety claims in detail and give a relatively objective report to the public.
On twitter != 'active on twitter'
There's a biiiiiig difference between being 'on twitter' and what I shall refer to kindly as terminally online behaviour aka 'very active on twitter.'
Microsoft might as well hire the entire team leaving and give them a Dave Cutler-style team deal - do what you folks want in San Francisco - full autonomy, reporting directly to the CEO, and make us some goddamned money already.
All that compute credits from Microsoft - hasta la vista baby.
This is probably not something you want to hear as a researcher who is motivated by pushing the frontiers of our capabilities, nor as a researcher who is motivated by pushing their compensation.
I wouldn’t take over as CEO or interim-CEO unless I knew why the previous CEO was fired and was OK with the process and reasoning.
"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity."
The definition of AGI always puzzles me, because the "G" in AGI is general, and that word certainly doesn't play well with "narrow". AGI is a new buzzword I guess.
I think when people say "takeover" or "coup" it's because they want to convey their view of the moral character of events, that they believe it was an improper decision. But it muddies the waters and I wish they'd be more direct. "It's a coup" is a criticism of how things happened, but the substantive disagreements are actually about that it happened and why it happened.
I see lots of polarized debate any time something AI safety related comes up, so I just don't really believe that most people would feel differently if the same thing happened but the corporate structure was more conventional, or if Brockman's board seat happened to be occupied by someone who was sympathetic to ousting Altman.
Altman won't be starting a competing company. First, he may have contractual restrictions and second, OpenAI owns their IP. And even if Altman is somehow free to do what he wants because he was fired (doubtful), anyone who quits to go with him surely has an airtight non-compete. Besides, it's not like Sam and his few loyalists are just going to spin up a few servers and duplicate what OpenAI has done. He'll do something AI-adjacent like the chip startup he was rumored to be pursuing.
As for Microsoft, they have a contract with OpenAI and are deeply reliant on them at the moment. OpenAI isn't disappearing just because Sam and Greg aren't there. Nadella may not be happy with the change, but he'll just have to live with it. Nothing will change for the foreseeable future there either.
When it comes to lawsuits? Who knows, but I highly doubt Altman will fight, or if he does, it will be discreetly settled as it's in no one's interest to wage some protracted battle. Microsoft may want to renegotiate their deal, but that again most likely isn't going to be anything nasty, as Microsoft needs OpenAI right now.
As for developers and consumers of OpenAI's service? They won't care or notice for many months until whatever changes the new CEO and board have in mind are enacted.
Microsoft wants OpenAI to do research and give them the model to run on Azure.
They certainly don't need OpenAI competing in consumer products. ChatGPT could have been a Microsoft product in an only slightly different timeline.
And they'd rather they didn't compete in API serving, they'd rather everyone currently using OpenAI had to shift to Azure.
I'm not suggesting we don't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, it's hilarious to think people are existentially concerned with it.
Sam will get billions of dollars if he starts a new company, so there's no issue of money. In terms of data and training models, look at Anthropic - they did train a reasonable model. Heck, look at Mistral, a bunch of ex-Meta folks and their LLaMA team lead who spun up good models in months.
The only bottleneck I could think of would probably be RLHF data - but given enough money, that's not an issue either.
Money is exchanged for goods and services, like GPUs, hiring researchers and coders, and acquiring data.
No one is going to trust them. They will be able to use their whole 120M they got as donations to operate the company for a full week or two.
Good luck, twats.
(Maybe they have AGI/ASI in the basement. If so, kudos, they will be fine and it was classless to fire Sam)
Altman will be more than fine, he’ll get a bucket of money and the chance to prove he is the golden boy he’s been sold to the world. He will get to recruit a team that believes in his vision of accelerating AI for commercial use. This will lead to a more diverse market.
I hope for the best for those who remain at OpenAI. I hope for the best for Altman and Brockman.
Which does not say whether Microsoft was open to the idea or ultimately chose to pursue that path.
It's created huge noise and hype and controversy, and shaken things up to make people "think" they can be in on the next AI hype train "if only" they join whatever Sam Altman does now. Riding the next wave kind of thing because you have FOMO and didn't get in on the first wave.
I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.
It’s a signal. The only meaning is the circumstances under which the signal is given: Sam made an ask. These were answers.
https://loeber.substack.com/p/a-timeline-of-the-openai-board
Related read: https://www.techtris.co.uk/p/openai-set-out-to-show-a-differ...
When I say alive, I mean it's like something to be that thing. The lights are on. It has subjective experience.
It seems many are defining ASI as just a really fast self-learning computer. And sure, given the wrong type of access and motive, that could be dangerous. But it isn't any more dangerous than any other faulty software that has access to sensitive systems.
If lots of the smartest human minds make AGI, and it exceeds a mediocre human-- why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists that made it, there's no real guarantee that there's lots of low hanging fruit for it to self-improve.
I think the near problem with AGI isn't a potential tech singularity, but instead just the tendency for it potentially to be societally destabilizing.
(That's the religious text of the anti-AI cult that founded OpenAI. It's in the form of a very long Harry Potter fanfic.)
I also wasn't being facetious. If there are other places to share work and ideas with developers online, I'd love to hear about them!
But they will.
The amount of weight people here give to an _emoji_ on this site... Rampant, unnecessary, baseless speculation in every comment thread about Altman.
Just wait til monday next time. These ultra wealthy over-privileged Worldcoin fucks are not worth this much attention.
There’s nothing wrong with not following, it’s a brave and radical thing to do. A heart emoji tweet doesn’t mean much by itself.
Main problems stopping it are:
- no intelligent agent is motivated to improve itself because the new improved thing would be someone else, and not it.
- that costs money and you're just pretending everything is free.
The for-profit bender that OpenAI was on appears to have been more of the issue. It's one thing to create greater capabilities for enhancing the rate of research, and another rushing those capabilities to market.
Btw, I think it's funny how much credit Hinton gets for AI. His contribution is pretty much just keeping some grad students on the problem.
Work is work. If you start being emotional about it, it's a bad, not good, thing.
Lots of people have been publicly suggesting that, and that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, including Altman.
The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.
>Dozens of OpenAI staffers internally announced they were quitting the company Sunday night, said a person with knowledge of the situation, after board director and chief scientist Ilya Sutskever told employees that fired CEO Sam Altman would not return.
https://www.theinformation.com/articles/dozens-of-staffers-q...
And non-competes can be as airtight as you like... they are completely unenforceable in California, which is where OpenAI's HQ is based and where Sam Altman lives.
First the loyal will jump ship. Followed quickly by the mercenaries who see the chance to join a new rocket ship from the ground up. Then as OpenAI's shares tank on the secondary market the rest will decide they've seen enough of their paper money burn and cash out.
OpenAI will survive, but it's going to be a much smaller company with a much smaller valuation.
As for Microsoft I'm guessing one of the strings Nadella was pulling was threatening to revoke credits, resources and even use by his teams and I'm sure he would be interested in investing in whatever Altman starts next and dedicating those now spare machines to the new enterprise.
What do you imagine he could have done about the board of a non-profit as CEO and fellow board-member?
I think the more important question is: Is Sutskever interested in a model 5.0 anytime soon? If he really ousted Sam A. because he thought they moved "too fast" wouldn't he rather work on making 4.0 "more secure" (whatever that means) instead of producing a 5.0?
That they didn't complete the process of finding a permanent CEO over the weekend, after firing and then negotiating with Altman?
Give me a break. Apple Watch and AirPods are far and away leaders in their category, Apple's silicon is a huge leap forward, there is innovation in displays, CarPlay is the standard auto interface for millions of people, and while I may question the utility, the Vision Pro is a technological marvel; iPhone is still a juggernaut (and the only one of these examples that predates Jobs' passing), etc. etc.
Other companies dream about "coasting" as successfully.
https://twitter.com/eshear/status/1703178063306203397?t=8nHS...
As for Microsoft, if they let OpenAI go, then what? Does Google pick them up? Elon? They are still looking to invent AGI, so I'd be surprised if no one wants to take advantage of that opportunity. I'd expect Microsoft to be aware of this and weigh into their calculus.
I've seen similar with the cloud credits thing: people just pontificating about whether it's even a viable strategy.
As soon as one person becomes more important than the team, as in the team starts to be structured around said person instead of with the person, that person should be replaced. Because otherwise, the team will not function properly without the "star player", nor is the team more than the sum of its members anymore...
I think the second best thing after having technical knowledge is to recognize smart employees and then not get in their way...
> But it isn't anymore dangerous than any other faulty software that has access to sensitive systems.
Seems to me that can be unboundedly dangerous? Like, I don't see you making an argument here that there's a limit to what kind of dangerous that class entails.
As board members, both Altman and Brockman would have presumably had to vote on any changes to the board - including reduction in number of members and appointment of new members.
Do you think the composition of the board before Friday could've been reached without some level of support from Altman and Brockman?
> Also lol "religious text", how dare people have didactic opinions.
That's not what a religious text is, that'd just be a blog post. It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.
Other than 1) Microsoft and 2) anyone building a product with the OpenAI api 3) OpenAI employees…
…is OpenAI crashing and burning a big deal?
This seems rather over hyped… everyone has an opinion, everyone cares because OpenAI has a high profile.
…but really, alternatives to chatGPT exist now, and most people will be, really… not affected by this in any meaningful degree.
Isn’t breaking the stranglehold on AI what everyone wanted with open source models last week?
Feels a lot like Twitter; people said it would crash and burn, but really, it’s just a bit rubbish now, and a bunch of other competitors have turned up.
…and competitive pressure is good right?
I predict: what happens will look a lot like what happened with Twitter.
Ultimately, most people will not be affected.
The people who care will leave.
New competitors will turn up.
Life goes on…
Maybe they created the new model and there's something interesting about it?
The only thing it damages about Altman is the credibility of him using his purported concern for AI safety as a PR lever for regulatory capture.
Just as another perspective.
Maybe they don't seem that to others? I mean, you're not really making an argument here. I also use GPT daily and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.
California's non-compete laws don't cover "trade secrets", only your ability to use your expertise to pursue employment in your chosen field. In other words, if you're just a regular programmer, you can go work wherever you like or start a competing company. If you're a principal scientist or architect, you would be in danger of violating your contract. Anyone who went with Altman would presumably have deep inside knowledge of OpenAI's secret sauce and therefore be restricted.
You can disagree. You can say only explicit non-emoji messages matter. That’s ok. We can agree to disagree.
But if we're seeing the existence of an unaligned superintelligence, surely it's squarely too late to do something about it.
Noncompetes are illegal in California.
If I am good at hacking computers, it does not mean that I should hack all of them and go to jail.
By what metric? I prefer open hardware and modifiable software - these products are in no way leaders for me. Not to mention all the bluetooth issues my family and friends have had when trying to use them.
You just need to temper that before you start swearing oaths of fealty on twitter; because that's giving real Jim Jones vibes which isn't a good thing.
The example of Steve Jobs used in the above post is probably a prime example - Apple just wouldn’t be the company it is today without that period of his singular vision and drive.
Of course they struggled after losing him, but the current version of Apple that has lived with Jobs and lost him is probably better than the hypothetical version of Apple where he never returned.
Great teams are important, but great teams plus great leadership is better.
Why didn't the board explain itself clearly?
There are times when saying anything publicly would be considered defamation and would open them up to lawsuits, but it seems they owe it to their own staff to explain in plain words. They didn't explain the situation properly, as per leaked internal announcements.
It doesn't matter if it's large, unless the "very active on twitter" group is large enough to be the majority.
The point is that there may be (arguably very likely) a trait AI researchers active on Twitter have in common which differentiates them from the population therefore introducing bias.
It could be that the 30% (made up) of OpenAI researchers who are active on Twitter are startup/business/financially oriented and therefore align with Sam Altman. This doesn't say as much about the other 70% as you think.
On a related note, has this meaningfully broken through to the mainstream yet? If a ChatGPT competitor comes out tomorrow that is just as good - but under a different brand - how many people will switch because it's Altman-backed? I'll be curious to find out.
> That's not what a religious text is, that'd just be a blog post.
Yes, almost as if "Lesswrong is a community blog dedicated to refining the art of human rationality."
> It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.
I don't think anybody either asked somebody to, or actually did, donate all their money. As to "joining a cult group house polycule", to my knowledge that's just SF. There's certainly nothing in the Sequences about how you have to join a cult group house polycule. To be honest, I consider all the people who joined cult group house polycules, whose existence I don't deny, to have a preexisting cult group house polycule situational condition. (Living in San Francisco, that is.)
I also found it weird and cultish. I guess it's not so different than signing a birthday card going around the office tho. "Sorry you got fired, see you at the next burning man"
When someone runs a model in a reasonably durable housing with a battery?
(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)
Everyone is just reiterating that the board is inept and trying to undermine them. This does not sit right with me.
The board's decisions may or may not turn out to be correct in hindsight. But it's very difficult to say that this was a good example of leadership or decision making.
The worry is not necessarily that the systems become "alive", though; we are already bad enough ourselves as a species in terms of motivation, so machines don't need to supply the murderous intent: at any given moment there are at least thousands if not millions of people on the planet that would love nothing more than to be able to push a button and murder millions of other people in some outgroup. That's very obvious if you pay even a little bit of attention to any of the Israel/Palestine hatred going back and forth lately. [There are probably at least hundreds to thousands that are insane enough to want to destroy all of humanity if they could, for that matter...] If AI becomes powerful enough to make it easy for a small group to kill large numbers of people that they hate, we are probably all going to end up dead, because almost all of us belong to a group that someone wants to exterminate.
Killing people isn't a super difficult problem, so I don't think you really even need AGI to get to that sort of an outcome, TBH, which is why I think a lot of the worry is misplaced. I think the sort of control systems that we could pretty easily build with the LLMs of today could very competently execute genocides if they were paired with suitably advanced robotics, it's the latter that is lacking. But in any case, the concern is that having even stronger AI, especially once it reliably surpasses us in every way, makes it even easier to imagine an effectively unstoppable extermination campaign that runs on its own and couldn't be stopped even by the people who started it up.
I personally think that stronger AI is also the solution and we're already too far down the cat-and-mouse rabbithole to pause the game (which some e/acc people believe as the main reason they want to push forward faster and make sure a good AI is the first one to really achieve full domination), but that's a different discussion.
There is nuance to this point.
One can ultimately still push the frontiers while also showing restraint - stop acting like it's either-or.
I rarely see a professor or PhD student voicing a political viewpoint (which is what the Sam Altman vs Ilya Sutskever debate is) on their Twitter.
Seems like a way to put the incentives and motivations of the staff at odds with the charter of the organisation.
Bloomberg, the verge and the information all went to bat for Altman in a big way on this.
It's kind of what got it achieved. Every other company didn't really see the benefit of going straight for AGI, instead working on incremental additions and small iterations.
I don't know why the board decided to do what it did, but maybe it sees that OpenAI was moving away from R&D and too much into operations and selling a product.
So my point is that OpenAI started as a charity and was literally set up in a way to protect that model, by having the for-profit arm be governed by the non-profit wing.
The funny thing is, Sam Altman himself was part of the people who wanted it that way, along with Elon Musk, Ilya and others.
And I kind of agree, what kind of future is there here? OpenAI becomes another billion-dollar startup that what? Eventually sells out with a big exit?
It's possible to see the whole venture as taking away from the goal set out by the non-profit.
Ilya is pretty serious about alignment (precisely?) due to his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)
OpenAI is not a typical LLC or S/C-corp though, so the Board also has to overcome that conceptual hurdle.
A lot of researchers like to work on cutting edge stuff, that actually ends up in a product. Part of the reason why so many researchers moved from Google to OpenAI was to be able to work on products that get into production.
> Particularly with a glorified sales man

> Sounds like they aren't spending enough time actually working.

Lmao I love how people come down to personal attacks on people.
Actually the exodus of talent from OpenAI may turn out to be beneficial for the development of AI by increasing competition - however it will certainly go against the stated goal of the board for firing Altman, which was basically keeping the development under control.
Now consider the training has caused it to have undesirable behavior (misaligned with human values).
> So, here's what happened at OpenAI tonight. Mira planned to hire Sam and Greg back. She turned Team Sam over past couple of days. Idea was to force board to fire everyone, which they figured the board would not do. Board went into total silence. Found their own CEO Emmett Shear
Written by the person who broke the story at Bloomberg.
So it appears a single person on the board wanted the talks to bring him back, and nobody else. I think that's 1 against 3, but the point is that the board wasn't totally united (which is not surprising).
Also AWS could be the sugar daddy.
No wonder the CEO got fired.
Some important questions:
- If Ilya had 4:2, why not just sit Sam down and work all this out in private?
- Why has the board been completely unable to explain themselves to OAI employees? To the public?
- Why not take a more neutral "parting ways" tone?
- Latest reporting suggests that the board is doing this without any outside counsel (legal or professional network). It seems absolutely bonkers to risk funding sources, e.g. MSFT, on a decision like this.
It looks like the current board (4 people?) can do absolutely whatever they want and they report to nobody? Nobody can replace them?
Second question: in the hypothetical situation that they all died, who would pick the new board, and how? The CEO? What if the CEO died too? I think a case like this happened after a helicopter crash in Australia - where everyone boarded the same helicopter and died.
His own sister - even if it's not true, it reflects poorly to have the kind of relationship with your sister where she'd say this; and if it's true, it's very problematic.
And for some reason, very very little mention of this. I just find it suspicious from a media behavior point of view.
This seems like a fairly unconventional idea in a capitalist world where Altman has made all his money on exactly that - equity.
Seems like it has ultimately bitten him, the investors, and the board in the ass. A poor incentive structure in the corporate governance is going to really mess with people's heads.
That's kinda what happened. The latest gist I read was that the non-profit, idealistic(?) board clashed with the for-profit, hypergrowth CEO over the direction to take the company. When you read the board's bios, they weren't ready for this job (few are; these rocket-ship stories are rare), the rocket ship got ahead of their non-profit goals, and they found themselves in over their heads, then failed to game out how this would go over (poor communication with MS, not expecting Altman to get so much support).
From here, the remaining board either needs to surface some very damning evidence (the memo ain't it) or step down and let MS and Sequoia find a new board (even if they're not officially entitled to do that). Someone needs to be saying mea culpa.
I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.
Sam has been agreeing with this group and using this as the reason to go commercial to provide funding for that goal. The problem is these new products are coming too fast and taking resources which affects the resources they can use for safety training.
This group never wanted to release ChatGPT but was forced to because a rival company made up of ex-OpenAI employees was going to release their own version. To the safety group, things have been getting worse since that release.
Sam is smart enough to use the safety group's fear against them. They finally clued in.
OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.
OpenAI's vision of a safe AI has turned into a vision of human censorship rather than protecting society from a rogue AI with the power to harm.
But there is an alternative scenario... in order for Microsoft to avoid losing any momentum, they might offer Altman an insane amount of money to become the John Giannandrea of Microsoft and bring as many of his recent colleagues with him. And for Altman this might be the easiest way to not lose ground as well, with Microsoft's patents and license agreements.
Seems like a bit of a commercial risk there if the CEO can 'make' a third of the company down tools.
What does Altman bring to the table besides raising money from foreign governments and states, apparently? I just do not understand all of this. Like, how does him leaving and getting replaced by another CEO the next week really change anything at the ground level other than distractions from the mission being gone?
And the outpouring of support for someone who was clearly not operating how he marketed himself publicly is strange and disturbing indeed.
Not that I am encouraging the GP. I upvoted the “laaaaaaaawyer” comment.
I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.
If OpenAI ceases to be Sam’s vision someone will replace it.
It is a good thing for the ecosystem I guess; we will have more diverse products to choose from.
But making AI more safe? Not likely. The tech will spread and Ilya will probably not make a safer AGI, because he will not control it.
This sounds, to me, like the company leadership want the ability to do some sort of picking of winners and losers, bypassing the electorate.
Again a stark reminder that all of these guys from Ray Dalio to the run of the mill SF VC are all just normal, twisted people who don't know much better about anything and merely had a run of good luck. Stop paying attention to them.
Even setting that aside for a second, that doesn't change my essential point that the board doesn't necessarily have all the autonomy it thinks it has. There are for sure repercussions to this - they may have to make concessions. Some of the seemingly committed funding may be unpaid and the donors may have the ability to invoke MAC clauses and similar to pull it. Even if that turns out not to be the case, the way this has played out will certainly affect decisions about future donations etc.
[1] https://www.theguardian.com/technology/2023/nov/20/sam-altma...
If they are able to retain enough people to properly release a GPT-5 with significant performance increases in a few months, I would assume that the effect is less pronounced.
Jumping to a different platform is a huge sacrifice for power users - those who create content and value.
None of this is a factor here. ChatGPT is just a tool, like an online image resizer.
"Dozens" sounds like about right amount for a large org.
Pine forests are known to grow by fires. Fires scatter the seeds around, the area which is unsustainable is reset, new forests are seeded, life goes on.
This is what we're seeing, too. A very dense forest has burned, seeds are scattered, new, smaller forests will start growing.
Things will slow down a bit, but accelerate again in a more healthy manner. We'll see competition, and different approaches to training and sharing models.
Life will go on...
Fellow nerds, you really need to go into work on Monday and have a hard chat with your C levels and legal (Because IANAL). The question is: Who owns the output of LLM/AI/ML tooling?
I will give you a hint, it's not you.
Do you need to copyright what a CS agent says? No, you want them on script as much as possible. An LLM parroting your training data is a good thing (assuming a human wrote it). Do you want an LLM writing code, or copy for your product, or a song for your next corporate sing-along (where did you go, old IBM)? No you don't, because it's likely going straight to the public domain. Depending on what you're doing with the tool and how you're using it, it might not matter that this is the case (it's an internal thing), but M$, or OpenAI, or whoever your vendor is, having a copy that they are free to use might be very bad...
There's also no guarantee that Altman will really start a new company, or be able to collect funding to hire everyone quickly. I wonder if these people are just very loyal to Sam.
I think that illustrates it will be a big uphill battle for any new entrant, no matter how well funded or resourced.
Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.
If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.
It is Sam Altman. He will have one in a week.
> It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.
I would imagine most employees at any organization are not really there because of corporate values, but their own interests.
> What does Altman bring to the table besides raising money from foreign governments and states, apparently?
And one of the world's largest tech corporations. If you are interested in the money side, that isn't something to take lightly.
So I would bet it is just following the money, or at least the expected money.
The new board also wants to slow development. That isn't very exciting either.
And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.
Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.
Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.
I'm assuming you meant "aren't" here.
> That would imply there was some arbitrary physical limit to intelligence
All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.
Also there's no guarantee the amount of raw computation is going to increase quickly.
In any case, the kind of exponential runaway you mention (years) isn't "pandemic at the speed of light" as mentioned in the grandparent.
I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface to run native computer code for math and data-processing help) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that just completely thrash the signal-to-noise ratio of written text, etc.
I presume this Mira person wasn't totally freelancing -- how would this even end up being presented to the board without some direction from someone on the board. So maybe more like 3.5 against 0.5. It could have been a total flip flop, but that's a bigger assumption. I have no problem not assuming grand narratives until the basic reporting shakes out.
They didn't "bring" a hyper capitalist. Sam Co-founded this entire thing lol. He was there from the beginning.
Why, then, did they bring in THE CO-FOUNDER OF TWITCH to help them out? Seems like an off-brand thing to do for a group of people focused on "the mission".
The accusations are about events 25 years ago, when they were children. No one will ever be able to disprove this, so there's no way to undo the reputational damage.
I imagine you need to signal that you want in on the deal by departing. Get founder equity.
Don’t understand his thought process, especially after all the resignations. Does he really expect OpenAI to maintain its position especially with the threat of Microsoft and pretty much all other investors backing off?
Ironically, he might have been better off keeping Sam since he’d have some say in things. But if llama 4 beats gpt5? Zuck won’t even answer his calls.
Now it turns out Linux is the workhorse everywhere for running workloads or consuming content. Almost every programming language (other than Microsoft's own SDKs) gets developed on Linux and has first-class support for Linux, while Windows is always an afterthought.
It has gone to the extent that, to lure developers, Microsoft has to embed a Linux in a virtual machine on Windows, called WSL.
Local inference is going to get cheaper and more affordable, that's for sure.
New models would also emerge.
So OpenAI doesn't seem to have an IP that can withstand all that IMHO.
This then causes young men to decide they should be in open relationships because it's "more logical", and then decide they need to spend their life fighting evil computer gods because the Bayes' theorem thing is weak to an attack called "Pascal's mugging" where you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.
Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.
https://metarationality.com/bayesianism-updating
Bit old but still relevant.
In the case of an LLM handing it to me can I sue MS or OpenAI for giving out that IP, or is it on me for not checking first? Is any of this covered in the TOS?
The world is filled with Sam Altmans, but surely not enough Ilya Sutskevers.
That would be fun case law to keep this soap opera going
I think academics have a general faith in the goodwill of intelligence. Benevolence may be a convergent phenomenon. Maybe the mechanisms of reason themselves require empathy and goodness.
The most recent case was notably in the Bahamas though.
Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.
But when you go beyond that, it becomes difficult to ignore trend lines.
Here's a detailed scenario breakdown of how it might come to be: https://www.dwarkeshpatel.com/p/carl-shulman
OpenAI would not exist if FAANG had been capable of getting out of its own way and shipping things. The moment OpenAI starts acting like the companies these people left, it's a no-brainer that they'll start looking for the door.
I'm sure Ilya has 10 lifetimes more knowledge than me locked away in his mind on topics I don't even know exist... but the last 72 hours are the most brain dead actions I've ever seen out of the leadership of a company.
This isn't even cutting off your nose to spite your face: this is like slashing your own tires to avoid driving in the wrong direction.
The only possible justification would have been some jailable offense from Sam Altman, and ironically their initial release almost seemed to want to hint that before they were forced to explicitly state that wasn't the case. At the point where you're forced to admit you surprise fired your CEO for relatively benign reasons how much must have gone completely sideways to land you in that position?
https://x.com/esyudkowsky/status/1725630614723084627?s=46
Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.
But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.
I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally only discovered the field of AI safety through his writings. Through them I read about and agreed with the many ways he had thought of by which AGI could cause extinction, and I, along with a college friend, decided to heed his call for people to start doing something to avert potential extinction.
He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.
"Investors were hoping that Altman would return to a company “which has been his life's work”"
As opposed to Sutskever, who they found on the street somehow, yeah?
Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).
The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate, nor would that be a good idea, that you manually calculate these updates: humans are very bad at it.
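As a concrete (made-up) example of the kind of update being described, here's a minimal sketch in Python; the probabilities are invented purely for illustration:

  # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
  prior = 0.01            # P(H): prior belief in the hypothesis
  p_e_given_h = 0.9       # P(E|H): chance of the evidence if H is true
  p_e_given_not_h = 0.05  # P(E|~H): chance of seeing the evidence anyway
  p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
  posterior = p_e_given_h * prior / p_e
  print(round(posterior, 3))  # ~0.154: belief shifts toward H, but nowhere near certainty

The point is the direction and rough size of the shift, not that anyone should (or could) do this arithmetic in their head for everyday beliefs.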
> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.
Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.
This is kind of like the leadership of the executive branch switching parties. You're not going to say "why would the staff immediately quit?" Especially since this is corporate America, and sama can have another "country" next week.
CEOs should be judged by their vision for the company, their ability to execute on that vision, bringing in funding, and building the best executive team for that job. That is what Altman brings to the table.
You make it seem that wanting to make money is a zero-sum game, which is a narrow view to take - you can be heavily emotionally and intellectually invested in what you do for a living and wanting to be financially independent at the same time. You also appear to find it “disturbing” that people support someone that is doing a good job - there has always been a difference between marketing and operations, and it is rather weird you find that disturbing - and appreciate stability, or love working for a team that gets shit done.
To address your initial strawman, why would workers quit when the boss leaves? Besides all the normal reasons listed above, they also might not like the remaining folks, or they may have lost faith in those folks, given the epic clusterfuck they turned this whole thing into. All other issues aside, if I would see my leadership team fuck up this badly, on so many levels, i’d be getting right out of dodge.
These are all common sense, adult considerations for anyone that has an IQ and age above room temperature and that has held down a job that has to pay the bills, and combining that with your general tone of voice, I’m going to take a wild leap here and posit that you may not be asking these questions in good faith.
It's the AI era - VCs are going crazy funding AI startups. What makes you think Greg and Sam would have a hard time raising millions/billions and starting a new company in a week if they want to?
It's not quite where it is (or was) with Tesla, where it was hopeless to know what was sincere and what was just people talking up their investment/talking down their short, but it's getting there.
So that helped cut through all the cruft with this. There was a lot of effort behind putting across the perception that the board was going to resign and that Altman was going to come back.
Looked at through that lens, it makes more sense: the existing board had little incentive to quit and rehire Sam/Greg. The only incentive was if mass resignations threatened their priorities of working on safety and alignment, and I get the sense that most of these resignations are more on the product engineering side.
So I don't really think this is a twist that no one saw coming.
Yes, clearly overrated in terms of credit. Doing foundational work in the field going back to the 70s which laid the groundwork and inspired the resurgence of neural networks in the late 80s. Being a solid community organiser throughout his career and keeping neural network research alive through the more formal statistical methods dominating for over a decade. Supervising and thus raising many others who themselves contributed greatly to the explosion of neural network utility we have seen since around 2010 until now. Should I carry on?
I think it is absolutely clear that Hinton has contributed plenty enough to get a massive amount of credit for where we are today. The kind of mentality at display here is akin to ahistoricity on the level of saying that Gordon Moore "just started a company" after Apple released the M1 under the delusion that there is not a direct lineage between what we have today and breakthroughs and efforts in the past. Believe it or not, but we stand on the shoulders of giants and cutting them some slack is not the same as downplaying the impact of people more active in the present day; that are gradually becoming future giants.
i mean physically - i'm not suggesting anything to do with chest-bursters and the like.
Altman was trying to play this game in parallel with commercialization, which brings a whole pile of conflicting groups into the picture. People have utterly underestimated the depth of interest represented by the board.
It is highly amusing how many of the EA cult are on each side, and how both will portray whatever they are pursuing entirely for personal goals as for the greater good when in reality no one has a clue.
Microsoft hasn't embraced that ideology for more than a decade now. Might be time to let go of the boomer compulsion.
Better run for the lifeboat before the ship hits the iceberg.
I have no idea what the actual proportion is, nor how investors feel about this right now.
The true proportion of researchers who actively voice their political positions on twitter is probably much smaller and almost certainly a biased sample.
> It is Sam Altman. He will have one in a week.
His previous companies were Loopt and Worldcoin. Won't his next venture require finding someone else to piggyback off of?
> If you are interested in the money side, that isn't something to take lightly.
I am interested in how taking billions from foreign companies and states could lead to national security and conflict of interest problems.
> The new board also wants to slow development.
It's not a new board as far as I know.
What is this instability, in your view? And how is this “desired stability” going to come back?
For example, the GPT-4 128K-token model is unavailable, and the GPT-4V model is also unavailable.
OpenAI will be even less open now. Ilya must protect all of us from his powerful creations.
Only he and his alignment team can be trusted.
For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”
If Ilya & co. want the staff to side with them, they have to give a reason first. It doesn't necessarily have to be convincing, but not giving a reason at all will never be convincing.
I'm not sure you appreciate how enterprise licence agreements work. Every detail of who owns what will have been spelled out, along with the copyright indemnities for the output.
No, there isn't a correct way to do anything in the real world, only in logic problems.
This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)
The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".
https://metarationality.com/probabilism
> Given that this didn't happen with anyone else
They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.
And of course, they think evil computer gods are going to kill them.
It's "such a big deal" because he has been leading the company, and apparently some people really like how and they really don't like how it ended.
Why would it require any other explanation? Are you asking what leaders do and why an employee would care about what they do...?
Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?
If you live in a city right now there are millions of networked computers that humans depend on in their everyday life and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals etc). Some are actual robotic killing machines but most have other purposes. Hardly any are air-gapped nowadays and all our security assumes the network nodes have no agency.
A super intelligence residing in that network would be very difficult to kill and could very easily kill lots of people (destroy a dam for example), however that sort of crude threat is unlikely to be a problem. There are lots of potentially bad scenarios though many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here IMO.
It's not a new board, but it's the time when the board decided to assert their power and make their statement/vision clear.
There is just a giant gap here where I simply do not get it, and I see no evidence that my not getting it comes from missing some key aspect of all this. This just seems like classic cargo cult, cult of personality, and following the money and the people who think they know best.
I think you are right that Ilya didn't want to give out secret information to not open up himself to lawsuits.
“Difficult to understate” would mean he has little to no social capital.
It won't be hard for them to hire researchers and engineers, from OpenAI or other places.
Questions like this make me wonder if you are a troll. I won't continue this thread.
There’s also a chatbot Elo ranking which crowdsources model comparisons: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...
GPT-4 is the king right now
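For anyone unfamiliar, that arena ranking works roughly like chess Elo: each crowd-sourced head-to-head vote nudges the two models' scores. A minimal sketch of a standard Elo update (the K-factor and ratings are illustrative assumptions, not necessarily lmsys's actual parameters):

  def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
      # Standard Elo update after one pairwise comparison.
      expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
      return r_winner + k * (1.0 - expected), r_loser - k * (1.0 - expected)

  print(elo_update(1250.0, 1200.0))  # winner gains ~13.7 points, loser drops ~13.7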
> The plan some investors are considering is to make the board consider the situation “untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors.
[1] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
Edit: Looks like Microsoft just hired Sam outright: https://twitter.com/satyanadella/status/1726509045803336122
This is what I referred to as "Cargo Cult AI". You can get the money, but money is not the only ingredient needed to make things happen.
edit: Looks like they won't have a brand new company next week, but joining an existing one.
So Ilya Sutskever, one of the most distinguished ML researchers of his generation does not understand the technology ?
The same guy who's been on record saying LLMs are enough for AGI ?
Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were. Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.
> buying castles
They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.
> scientific racism is real (though still buying mosquito nets for the people they're racist about)
Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.
> getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.
As far as I can tell, that's one guy.
> And of course, they think evil computer gods are going to kill them.
I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?
Context:
---------
1.1/ Ilya Sutskever and the board do not agree with Sam Altman's vision of a) too-fast commercialization of OpenAI AND/OR b) too-fast progression to a GPT-5-level model.
1.2/ Sam Altman thinks fast iteration and commercialization are needed in order to make OpenAI financially viable, as it is burning too much cash, and to stay ahead of the competition.
1.3/ Microsoft, after investing $10+ billion, does not want this fight to slow the progress of AI commercialization and let it fall behind Google AI etc.
a workable solution:
--------------------
2.1/ @sama @gdb form a new AI company, let us call it e/acc Inc.
2.2/ e/acc Inc. raises $3 billion as a SAFE instrument from VCs who believe in Sam Altman's vision.
2.3/ OpenAI and e/acc Inc. reach an agreement such that:
a) GPT-4 IP is transferred to e/acc Inc.; this IP transfer is valued as an $8 billion SAFE investment from OpenAI into e/acc Inc.
b) Microsoft's existing 49% share in OpenAI is transferred to e/acc Inc., such that Microsoft owns 49% of e/acc Inc.
c) the resulting "lean and pure non-profit OpenAI" with Ilya Sutskever and the board can steer AI progress as they wish; their stake in e/acc Inc. will act as a funding source to cover their future research costs.
d) employees can move from OpenAI to e/acc Inc. as they wish, with no anti-poaching lawsuits from OpenAI.
This is like a bunch of people joining a basketball team where the coach starts turning it into a soccer team, and then the GM fires the coach for doing this and everyone calls the GM crazy and stupid. If you want to play soccer, go play soccer!
If you want to make a ton of money in a startup moving fast, how about not setting up a non-profit spouting a bunch of humanitarian shit? It's even worse, because Altman very clearly did all this intentionally: playing the "I care about humanity" card just long enough, riding on the coattails of researchers, so that he could spin up side ventures that use his new AI profile to make the big bucks. But now people want to make him a martyr simply because the board called his bluff. It's bewildering.
Consider the relative charisma of the people around him, though.
It's like saying don't worry about global thermonuclear war because we haven't seen it yet.
The Neanderthals, on the other hand, have encountered a super-intelligence.
One is Sutskever, who believes AI is very dangerous and must be slowed down and closed source (edit: clarified so that it doesn't sound like closed down). He believes this is in line with OpenAI's original charter.
Another is the HN open source crowd who believes AI should be developed quickly and be open to everyone. They believe this is in line with OpenAI's original charter.
Then there is Altman, who agrees that AI should be developed rapidly, but wants it to stay closed so he can directly profit by selling it. He probably believes this is in line with OpenAI's original charter, or at least the most realistic way to achieve it, effective altruism "earn to give" style.
Karpathy may be more amenable to the second perspective, which he may think Altman is closer to achieving.
In the field of AI, right now, "slowing down" is like deciding to stop the car and walk the track by foot in the middle of a Formula 1 race. It's like going backwards.
Unless things change from the current status quo, OpenAI will be irrelevant in less than 2 years. And of course many will quit such a company and go work somewhere where the CEO wants to innovate, not slow down.
He has a better chance than some other random guy who was not the CEO of OpenAI.
I want OpenAI to be absolutely crushed in the free market after this move. But it will take years for anyone to catch up with GPT-4, if even Anthropic is nowhere close.
She was the interim CEO; it seems that it was her and some of the rest of the executive team, not the board, that wanted Sam to come back. The board apparently was working on finding a new interim CEO to replace Mira that wasn't in Sam's camp more than it was trying to bring Sam back.
This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.
Decide Under Attack has been shown by game theory to have equally strong deterrence as Launch On Warning, while also having a much much lower chance of accidental or terrorist-triggered war.
Here is the paper that introduced Decide Under Attack:
A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, Admiral James A Winnefeld, Jr.
https://carnegieendowment.org/2019/09/10/commonsense-policy-...
I would think to myself, what if management ever had a small disagreement with me?
I quit a line cook job once in a very similar circumstance scaled down to a small restaurant. The inexperienced owners were making chaotic decisions and fired the chef and I quit the same day, not out of any kind of particular loyalty or anger, I just declined the chaos of the situation. Quitting before the chaos hurt me or my reputation by getting mixed up in it… to move on to other things.
My suspicion is that Microsoft will do exactly that: they will pull the money, sabotage the partnership deal and focus on rebuilding GPT in-house (with some of the key OpenAI people hired away). They will do this gradually, on their own timetable, so that it does not disrupt the GPT Azure access to their own customers.
I doubt that there could be a replacement for the Microsoft deal, because who would want to go through this again? OpenAI might be able to raise a billion or two from the hard core AI Safety enthusiasts, but they won't be able to raise $10s of Billions needed to run the next cycle of scaling.
https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...
In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.

Do you? Because that part is way more irritating, and, honestly, starting to read your original comment I thought that was where you were going with this: Why was he fired, exactly?
The way the statement was framed basically painted him as a liar, in a way so vague that people put forth the most insane theories about why. I can sense some animosity, but do you really think it's okay to fire anyone in a way where, to the outside, the possible explanation ranges from a big data slip to molesting their sister?
Nothing has changed. That is the part that needs transparency and its lack is bewildering.
If a CEO of a non-profit is raising billions of dollars from foreign companies and states to create a product that he will then sell to the non-profit he is CEO of, I view that as adding instability to the non-profit given its original mission. Because that mission wasn't to create a market for the CEO to take advantage of for personal gain.
As for Altman... I don't understand what's insignificant about raising money and resources from outside groups? Even if he wasn't working directly on the product itself, that role is still valuable in that he knows the amount of resources a project like this requires, while also commanding some familiarity with how to allocate them effectively. And on top of that, he seems to understand how to monetize the existing product a lot better than Ilya, who mostly came out of this looking like a giant hazard to anyone who isn't wearing rose-tinted sci-fi goggles.
Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or USSR).
It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?
Soviets also had the perimeter system, which was also supposed to relieve pressure for an immediate response.
(1) humanity should not be subjugated
(2) humanity should not go extinct before it’s our time
Even Kim Jong Un would agree with these principles.
Currently, any AGI or ASI built based on any of the known architectures contemplated in the literature which have been invented thus far would not meet a beyond-a-reasonable-doubt standard of being aligned with these two values.
I don’t know. The damage might be permanent. Everyone is probably going to be way more careful with what information they release and how they release it. Altman corrupted the entire community with his aggressive corporate push. The happy-go-lucky “look what we created” attitude of the community is probably gone for good. Now every suit is going to be asking “can we make a massive amount of money with this” or “can I spin up a hype train with this”.
Should they decide to sink to the level of VC scheming briefly, it will be like child's play for them.
They need to show they’re taking steps to stabilize things now that their hype factory has come unraveled.
I don’t think they particularly need these people, because they likely already have in-house talent that is competitive. But having these people on board now will allow them to paint a much more stable picture to their shareholders.
The board claims Altman lied. Is that it? About what? Did he consistently misinform the board about a ton of different things? Or about one really important issue? Or is this just an excuse disguising the actual issues?
I notice a lot of people in the comments talking about Altman being more about profit than about OpenAI's original mission of developing safe, beneficial AGI. Is Altman threatening that mission or disagreeing with it? It would be really interesting if this was the real issue, but if it was, I can't believe it came out of nowhere like that, and I would expect the board to have a new CEO lined up already and not be fumbling for a new CEO and go for one with no particular AI or ethics background.
Sutskever gets mentioned as the primary force behind firing Altman. Is this a blatant power grab? Or is Sutskever known to have strong opinions about that mission of beneficial AGI?
I feel a bit like I'm expected to divine the nature of an elephant by only feeling a trunk and an ear.
OpenAI already had the best technology fully developed and in production when Microsoft invested in them.
I believe "cargo cult" means something quite different to how you're using it.
It's not "cargo cult" to consider someone's CV when you hire them for a new job. Sam Altman ran a successful AI company before and he most likely can do it again if provided enough support and resources.
Still, what do they actually want? It seems a bit overly dramatic for such an organisation.
Yes, but that doesn't mean it's enough. Not every random guy who wasn't the CEO of OpenAI is about to start an AI company (though some probably are).
It's quite possible an AI company does need a better vision than "hire some engineers and have them make AI".
For example, one scenario someone in a different thread conjectured is that Sam was secretly green-lighting the intentional (rather than incidental) collection of large amounts of copyrighted training data, exposing the firm to a great risk of a lawsuit from the media industry.
If he hid this from the board, “not being candid” would be the reason for his firing, but if the board admits that they know the details of the malfeasance, they could become entangled in the litigation.
You know those stories where someone makes a pact with the devil/djinn/other wish-granting entity, and the entity fulfills one interpretation of the wish, but since it is not what the wisher intended it all goes terribly wrong? The idea of alignment is to make a djinn which can not only grant wishes, but grants them according to the unstated intention of the wisher.
You might have heard the story of the paper clip maximiser. The leadership of the paperclip factory buys one of those fancy new AI agents and asks it to maximise paperclip production.
What a not-well-aligned AI might do: Reach out through the internet to a drug cartel’s communication nodes. Hack the communications and take over the operation. Optimise the drug trafficking operations to gain more profit. Divert the funds to manufacture weapons for multiple competing factions at multiple crisis points on Earth. Use the factions against each other. Divert the funds and the weapons to protect a rapidly expanding paperclip factory. Manipulate and blackmail world leaders into inaction. If the original leaders of the paperclip factory try to stop the AI, eliminate them, since that is the way to maximise paperclip production. And this is just the beginning.
What a well-aligned AI would do: Fine-tune the paperclip manufacturing machinery to eliminate rejects. Reorganise the factory layout to optimise logistics. Run a successful advertising campaign which leads to a 130% increase in sales. (Because clearly this is what the factory owner intended it to do, although they did a poor job of expressing their wishes.)
Bear in mind that the cause of an equity market crash and its trigger are two different things.
The 2000 crash in Tech was caused by market speculation in enthusiastic dot-com companies with poor management YES, but the trigger was simply the DOJ finally making Bill throw a chair (they had enough of being humiliated by him for decades as they struggled with old mainframe tech and limited staffing).
If the dot-com crash trigger had not arrived for another 12-18 months, I’m sure the whole mess could have been swept under the rug by traders during the Black Swan event and the recovery of the healthy companies would have been 5-6 months, not 5-6 years (or 20 years in MSFT’s case).
The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.
The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.
> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).
https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...
Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile warning computer he operated, if it generated paragraphs of convincing text and several infographics as hallucinated “evidence” of the reality of the supposed inbound strike. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble, so in this situation of moral quandary he struggled for several minutes, until, finally, he went with his gut and countermanded the system which required disobeying the Soviet military’s procedures and should have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer’s rhetoric and graphics could have tipped this to 51-49 and thus caused our extinction.
It is wild to see how closely connected the web is though. Yudkowsky, Shear, and Sutskever. The EA movement today controls a staggering amount of power.
I think it's pretty obvious after reading it why people who were really committed to that Charter weren't happy with the direction that Sam was taking the company.
About him and Greg joining Microsoft.
> I believe "cargo cult" means something quite different to how you're using it.
I don't think so.
Tribes believed that building wooden airstrips or planes would bring the goods they had seen during wartime.
People believe that bringing Altman will bring the same thing (OpenAI as is) exactly where it's left off.
Altman is just tip of the iceberg. Might have some catalyst inside him, but he's not the research itself or the researcher himself.
Things you will never hear Satya Nadella say. Way more likely he will coordinate to unify as much of their workers as he can to continue on as a subsidiary, with the rest left to go work something out with other players crazy/desperate enough to trust them.
The dev bubble is not that small. This very website is I'm pretty sure not served from Windows.
Other than stack overflow or few handful of exceptions, very little is actually served from Windows if I'm not wrong.
> Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?
By other things getting better, not by stalling the leader of the pack.
As for the board's silence to the public, this should be obvious. Talking about their thinking/plans/reasons for firing Sam exposes them to all kinds of risk both legally and otherwise. The safe move is to stay quiet in public and continue talks with the relevant stakeholders (Microsoft, Sam + loyalists) in private
In fact, he is exactly the type to be on the board.
He is not the one saying 'slow down, we might accidentally invent an AGI that takes over the world'. As you say, he says LLMs are not a path to a world-dominating AGI.
Specifically:
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews, he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.
People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.
We have a bunch of people talking about how worried they are and how we should slow down, Sam Altman among them, and yet he was shipping fast. And Elon Musk, who was concurrently working on his own AI startup while telling everyone we should stop.
There's no stopping this and any person of at least average intelligence is fully aware of this. If a "top researcher" is in favor of not researching, then they're not a researcher. If a researcher doesn't want to ship anything they research, they're also not a researcher.
OpenAI has shipped nothing so far that is in any way suggesting the end of humanity or other such apocalyptic scenario. In total, these AI models have great potency in making our media, culture, civilization a mess of autogenerated content, and they can be very disruptive in a negative way. But no SINGLE COMPANY is in control of this. If it's not OpenAI, it'll be one of the other AI companies shipping comparable models right now.
OpenAI simply had the chance to lead, and they just gave up on it. Now some other company will lead. That's all that happened. OpenAI slowing down won't slow down AI in general. It just makes OpenAI irrelevant in 1-2 years time max.
'.. before it's our time' is definitely in the eye of the beholder.
That is, if you do not subscribe to one of the various theories that him sinking Twitter was intentional. The most popular ones I've come across are "Musk wants revenge for Twitter turning his daughter trans", "Saudi-Arabia wants to get rid of Twitter as a trusted-ish network/platform to prevent another Arab Spring" and "Musk wants to cozy up to a potential next Republican presidency".
Personally, I think all three have merits - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter, it's to a large degrees other people's money.
It's fair that this is a distraction from the mission, but randomly firing him, having all their staff leave, and probably losing all the compute funding is a strange way of dealing with it.
> startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.
- The elite ML/AI researchers and engineers.
- The elite SV/tech venture capitalists.
These types come with their own followings - and I'm not saying that these two never intersect, but on one side you get a lot of brilliant researchers that truly are in it for the mission. They want to work there, because that's where ground zero is - both from the theoretical and applied point of view.
It's the ML/AI equivalent of working at CERN - you could pay the researchers nothing, or everything, and many wouldn't care - as long as they get to work on the things they are passionate about, AND they get to work with some of the most talented and innovative colleagues in the world. For these, it is likely more important to have top ML/AI heads in the organization, than a commercially-oriented CEO like Sam.
On the other side, you have the folks that are mostly chasing prestige and money. They see OpenAI as some sort of springboard into the elite world of top ML, where they'll spend a couple of years building cred, before launching startups, becoming VP/MD/etc. at big companies, etc. - all while making good money.
For the latter group, losing commercial momentum could indeed affect their will to work there. Do you sit tight in the boat, or do you go all-in on the next big player - if OpenAI crumbles the next year?
With that said, leadership conflicts and uncertainty is never good - whatever camp you're in.
Additionally, no one (not insiders at OpenAI and certainly not a journalist) other than people in those conversations actually knows what happened, and no one other than Ilya actually knows why he did what he did. Everyone else is relying on rumor and hearsay. For sure, the closer people are to the matter the more insight they are likely to have, but no one who wasn't in the room actually knows.
Hmm, they're not a complete anything but they're pretty different as they're not discrete. That's how we can teach them undefinable things like writing styles. It seems like a good ingredient.
Personally I don't think you can create anything that's humanlike without being embodied in the world, which is mostly there to keep you honest and prevent you from mixing up your models (whatever they're made of) with reality. So that really limits how much "better" you can be.
> That's not "unknown unknowns", that's just the problem of induction.
This is the exact argument the page I linked discusses. (Or at least the whole book is.)
> However, it's not like there's any alternative system of knowledge that can do better.
So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it. (A religion meaning a principle you orient your life around that gives it unrealistically excessive meaning, aka the opposite of nihilism.)
> I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?
That's a great argument. The book I linked calls it "reasonableness". It's not a rational one though, so it's hard to use.
Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.
Main "logical" issue with it though is that it seems to ignore that things cost money, like where the evil AI is going to get the compute credits/GPUs/power bills to run itself.
But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.
Beside the argument that creshal brought up in a sibling comment that some people are more charismatic live and some are more charismatic through a camera:
In my observation, quite a few programmers are much more immune to "charisma influence" (or rather: manipulation by charisma) than other people. For example, in the past someone sent me an old video of Elon Musk where, in some TV show (I think), he explained how he wants to build a rocket to fly to the moon, and the person claimed that this video makes you want Musk to succeed because of the confidence Elon Musk shows. Well, this is not the impression that the video made on me ...
>The reason I was a founding donor to OpenAI in 2015 was not because I was interested in AI, but because I believed in Sam. So I hope the board can get its act together and bring Sam and Greg back.
I guess other people joined for similar reasons.
As regards the 'strange and disturbing' support, personally I thought OpenAI was doing cool stuff and it was a shame to break it because of internal politics.
Still, given the exodus and resources now available I’d imagine pretty fast
Sounds like firing was done to better serve the original mission, and is therefore probably a good thing. Though the way it's happening does come across as sloppy and panicky to me. Especially since they already replaced their first replacement CEO.
Edit: turns out Wikipedia already has a pretty good write up about the situation:
> "Sutskever is one of the six board members of the non-profit entity which controls OpenAI.[7] According to Sam Altman and Greg Brockman, Sutskever was the primary driver behind the November 2023 board meeting that led to Altman's firing and Brockman's resignation from OpenAI.[30][31] The Information reported that the firing in part resulted from a conflict over the extent to which the company should commit to AI safety.[32] In a company all-hands shortly after the board meeting, Sutskever stated that firing Altman was "the board doing its duty."[33] The firing of Altman and resignation of Brockman led to resignation of 3 senior researchers from OpenAI."
People making false accusations against you reflects poorly on _you_ now? What a world to live in.
> So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it.
I mean, nobody's actually done this. Honestly I hear more about Bayes' Theorem from rationality critics than rationalists. Do some people take it too far? Sure.
But also
> the real world isn't discrete
That's a strange objection. Our data channels are certainly discrete: a photon either hits your retina or it doesn't. Neurons firing or not is pretty discrete, physics is maybe discrete... I'd say reality being continuous is as much speculation as it being discrete is. At any rate, the problem of induction arises just as much in a discrete system as in a continuous one.
> Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.
Sure, but you should do that because you have no evidence for Russell's Teapot. The history of human evolution and current AI revolution are at least evidence for the possibility of superhuman intelligence.
"A teapot in orbit around Jupiter? Don't be ridiculous!" is maybe the worst possible argument against Russell's Teapot. There are strong reasons why there cannot be a teapot there, and this argument touches upon none of them.
If somebody comes to you with an argument that the British have started a secret space mission to Jupiter, and being British they'd probably taken a teapot along, then you will need to employ different arguments than if somebody asserted that the teapot just arose in orbit spontaneously. The catch-all argument about ridiculousness no longer works the same way. And hey, maybe you discover that the British did have a secret space program and a Jupiter cult in government. Proposing a logical argument creates points at which interacting with reality may change your mind. Scoffing and referring to science fiction gives you no such avenue.
> But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.
The thing is that reality really has no obligation to limit itself to what you consider reasonable threats. Was the asteroid that killed the dinosaurs a reasonable threat? It would have had zero precedents in their experience. Our notion of reasonableness is a heuristic built from experience, it's not a law. There's a famous term, "black swan", about failures of heuristics. But black swans are not "unknown unknowns"! No biologist would ever have said that black swans were impossible, even if they'd never seen nor heard of one. The problem of induction is not an excuse to give up on making predictions. If you know how animals work, the idea of a black swan is hardly out of context, and finding a black swan in the wild does not pose a problem for the field of biology. It is only common sense that is embarrassed by exceptions.
What I say is, both lost their status quo (OpenAI as the performer, Sam as the leader), and both will have to re-adjust and re-orient.
The magic smoke has been let out. Even if you restore the "configuration" of OpenAI with Sam and all employees before Friday, it's almost impossible to get the same company from these parts.
Again, Sam was part of what made OpenAI what it is, and without it, he won't be able to perform the same. Same is equally valid for OpenAI.
Things are changing, it's better to observe rather than dig for an entity or a person. Life is bigger than both of them, even when combined.
Seems like all these "business guys" think that's all it takes.
His previous endeavor was YC partner, right? So a rich VC turning into a CEO, to make even more money. How original. If any prominent figure was to be credited here beyond Ilya S., it would probably be Musk, not S.A., who as a YC partner/whatever played Russian roulette with other rich folks' money all these years... As for MS hiring S.A., they are just doing the smart thing: if S.A. is indeed that awesome and everyone misses the "charisma", he'll pioneer AI and even become the next MS CEO... Or Satya Nadella will have his own "Windows Phone" moment with SamAI ;)
Thus disproving your point, in my opinion. There may now be consequences to the board's decision that make their company less powerful in the future, but it won't be because they lacked the autonomy to make their own decisions. Getting to discover the consequences of your preferences is what autonomy is.
Wrong. Claude 2 beats GPT-4 in some benchmarks (e.g. HumanEval Python coding, math, analytical writing). It's close enough. It doesn't matter who holds the crown this week; Anthropic definitely has the ingredients to make a GPT-4-class model.
This is like comparing similar cars from BMW and Toyota, finding a few specific parameters where BMW has a higher score, and saying "You see? Toyota engineering is nowhere close".
This actually shows Sam Altman's true contribution: the free version of ChatGPT is undeniably worse than Bing Chat, and yet ChatGPT is a bigger brand.
(And it might be a deliberate choice to save money for Claude 3 instead of making Claude 2 absolutely SotA.)
If your analysis is based solely off YouTube interviews, I think your perspective on Sam’s capabilities and personality is going to be pretty surface level and uninteresting.
Here's more about the new interim CEO, the Justin.tv co-founder. It isn't paywalled. https://www.cnbc.com/2023/11/20/who-is-emmett-shear-the-new-...
Wrong question. From the behavior of the board this weekend, it seems like the question is more "Do you understand how he was fired?".
IE: Immediately, on a Friday before Market close, before informing close partners (like Microsoft with 49% stake).
The "why" can be correct, but if the "how" is wrong that's even worse in some regards. It means that the board's thinking process is wrong and they'll likely make poor decisions in the future.
I don't know much about Sam Altman, but the behavior of the board was closer to a huge scandal. I was expecting news of some crazy misdeed of some kind, not just a simple misalignment with values.
Under these misalignment scenarios, you'd expect a stern talking to, and then a forced resignation over a few months. Not an immediate firing / removal. During this time, you'd inform Microsoft (and other partners) of the decision to get everyone on the same page, so it all elegantly resolves.
EDIT: And mind you, I don't even think the "why" has been well explained this weekend. That's part of the reason why "how" is important, to make sure the "why" gets explained clearly to everyone.
As he approached, lightning crackled around him, as if he was commanding the elements themselves. With a deft flick of his wrist, he sent a bolt of lightning to scare away a school of flying sharks that were drawn by the storm. Landing on the deck of my boat with the grace of a superhero, he surveyed the chaos.
"Need a hand with those lobsters?" he quipped, as he single-handedly wrangled the crustaceans with an efficiency that would put any seasoned fisherman to shame. But Sam wasn't done yet. With a mere glance, he reprogrammed my malfunctioning GPS using his mind, charting a course to safety.
As the boat rocked violently, a massive wave loomed over us, threatening to engulf everything. Sam, unfazed, simply turned to the wave and whispered a few unintelligible words. Incredibly, the wave halted in its tracks, parting around us like the Red Sea. He then casually conjured a gourmet meal from the lobsters, serving it with a fine wine that materialized out of thin air.
Just as quickly as he had appeared, Sam mounted his drone once more. "Time to go innovate the weather," he said with a wink, before soaring off into the storm, leaving behind a trail of rainbows.
As the skies cleared and the sea calmed, I realized that in the world of Silicon Valley CEOs, having a "Sam Altman saved my butt" story was more than just a rite of passage; it was a testament to the boundless, almost mythical capabilities of a man who defied the very laws of nature and business. And I, a humble lobster fisherman, had just become part of that legend.
Of the $46 Billion Twitter deal ($44 equity + $2 debt buyout), it was:
* $13 Billion Loans (bank funded)
* $33 Billion Equity -- of this, ~$9 Billion was estimated to be investors (including Musk, Saudis, Larry Ellison, etc. etc.)
So it's about 30% other investors and 70% Elon Musk's money.
Whether or not he works at the company is symbolic and indicative of who is in charge: the people who want to slow AI progress, or the people who want to speed it up.
This guy is a villain.
I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.
This definitely sounds like someone the average person - including the average tech worker, exceptionally income-engorged as they may be - would want heading the, "Manhattan Project but potentially for inconceivably sophisticated social/economic/mind control et al." project. /s
Sam will be leading a new division at Microsoft. He will do alright now that he has access to all of the required resources.
> better to observe rather than dig for an entity or a person
Yes agreed. I don't know much about Sam personally and don't care. OpenAI itself has not made any fundamental breakthroughs in AI research. AI is much bigger than these two.
It's like an appeal to authority against an authority that isn't even saying what you're appealing for.
...which several subreddits dedicated to LLM porn or trolling could tell you is both mostly pointless and also blocks a ton of stuff you could find on any high school nerd's bookshelf as "unsafe".
And the roomba isn't running the model, it's just storing a portion of the model for backup. Or only running a fraction of it (very different from an iPhone trying to run the whole model). Instead, the proper model is running on the best computer from the Russian botnet it purchased using crypto it scammed from a discord NFT server.
Once again, the premise is that AI is smarter than you or anyone else, and way faster. It can solve any problem that a human like me can figure out a solution for in 30 seconds of spitballing, and it can be an expert in everything.
[1]https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
@dang — any plans to do anything here?
I mean not like you have to, but I can think of some stuff that could make this better (or at minimum experiments that could be run).
Not on this post in particular, but as a general HN issue, if we agree it's kind of degrading the experience and there are indeed likely fixes.
I'm starting to believe these workers are mostly financially motivated and that's why they follow him.
Pure f***g Greed. He is basically a front-man for a bunch of VCs/Angels/Influential Business Folks/Shady Investors/etc. who were betting on making big bucks through him.
Unfortunately, Ilya and his philosophical/ethical/moral stance has gotten in their way and hence they have let loose their dogs in the media to play up Sam Altman's "indispensability" to OpenAI.
However, pg has been working with many founders. And he has been working with Altman. I haven’t.
So while it may puzzle me, I do have to wonder what there is that I may be missing.
This is exactly it! Thanks for calling it out.
Sam Altman was just using the researchers and their IP to enrich himself (and his select group of friends) while shafting everybody else, including the researchers themselves.
The rest is "Halo Effect" - https://en.wikipedia.org/wiki/Halo_effect
Tech execs are trained to be toned down when in public and the camera is on them (so they don't say something that will make negative headlines later).
Example:
Elon's recent biography shows that he swears a lot casually while working (as do many of us). You wouldn't glean that from any of his public interviews.
I'm not actively worried about it, but let's not pretend something with all of the information in the world and great intelligence couldn't pull it off.