zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. gkober+z1 2023-11-18 23:00:36
>>medler+(OP)
I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662

2. ren_en+J3 2023-11-18 23:08:51
>>gkober+z1
everything about it screams amateur hour, from the language and timing of the press release to the fact that they didn't notify Microsoft. They apparently completely failed to anticipate how employees and customers would react to the news; Ilya saying the circumstances of Altman's removal "weren't ideal" shows how naive they were. They had no PR strategy to control the narrative and let rumors run wild

I doubt he returns; now he can start a for-profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to the Saudis about raising billions for an Nvidia competitor - >>38323939

Have to wonder how much of this was contrived as a win-win: either the OpenAI board does what he wants, or he gets a free out to start his own company without looking like he's purely chasing money

3. spacem+o4 2023-11-18 23:12:30
>>ren_en+J3
This story that they want him back turns it from amateur hour to peak clownshow.

This is why you need someone with business experience running an organization. Ilya et al. might be brilliant scientists, but these folks are not equipped to deal with the nuances of steering a ship as heavily scrutinised as OpenAI

4. ren_en+M6 2023-11-18 23:23:20
>>spacem+o4
actually wild to think that something like this can even be allowed to happen, considering OpenAI has (had?) a roughly $90B valuation and is important to the US from a geopolitical-strategy perspective.

comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI's valuation. You basically have impulsive children in charge of a massively valuable company

5. SllX+la 2023-11-18 23:42:05
>>ren_en+M6
Sure, it's important in some ways, but most corporations aren't direct subordinates of the US Government.

The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.

The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So on top of the usual failure modes of a for-profit enterprise that could get a CEO fired, you also get other possible failure modes, including ones grounded in pure ideology: the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than on its profitability, which is, uh, well, a bit more tenuous.

6. adastr+Cn 2023-11-19 00:54:44
>>SllX+la
All of them are when they become national security concerns. The executive branch could write the OpenAI board a letter directing them on what to do if there were a national security need. This has been done many times before, usually limited to the defense industry in wartime, but as Snowden showed, it has been done in tech as well.
7. SllX+sB 2023-11-19 02:28:46
>>adastr+Cn
Except that is literally not true, and the government loses in court to private citizens and corporations all the time, because, surprise: people in America have rights, and that extends to their businesses.

In wartime, in pandemics, and in matters of national security, the government's power is at its apex, but pretty much all of it has to withstand legal challenge. Even National Security Letters have their limits: they're an information-gathering tool, and the US Government can't use them to restructure a company; the structure of a company is not a factor in its ability to comply with an NSL's demands.

8. adastr+JM 2023-11-19 03:43:15
>>SllX+sB
The PATRIOT Act extended wartime powers to apply in peacetime, and there are other, more obscure authorizations that could be used. I used to work in the defense industry, and it was absolutely common knowledge that the government could step in to nationalize control of (though not the profits of) private industry when required. This was done in particular when rare resources were needed for supersonic and then stealth technology during the Cold War, and for uranium in the '40s and '50s.