zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. baidif+aq[view] [source] 2023-11-17 22:16:39
>>davidb+(OP)
- Can't be a personal scandal; the press release would be worded very differently

- The board is mostly independent, and the independent members don't have equity

- They talk about not being candid - this is legalese for “lying”

The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on one) that is misaligned with the Charter. That's the only fireable offense that warrants this language.

My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.

2. podnam+js[view] [source] 2023-11-17 22:27:17
>>baidif+aq
Doesn't make any sense. He is ideologically driven - why would he risk a once-in-a-lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something that is a PR disaster, probably something that would make Sam persona non grata in any business context.

3. kashya+hH[view] [source] 2023-11-17 23:36:48
>>podnam+js
From where I'm sitting (not in Silicon Valley, but Western EU), Altman never inspired long-term confidence as the head of "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us following the "AI story" saw his recent "testimony"[1] before the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] >>35960125

4. kdmcco+AT[view] [source] 2023-11-18 00:33:56
>>kashya+hH
> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.

5. thepti+531[view] [source] 2023-11-18 01:26:03
>>kdmcco+AT
I understand why your ideals are compatible with open source models, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and it later turns out that a certain prompt structure unlocks capability gains toward misaligned AGI, you can't put that genie back in the bottle.

And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.

Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.

6. zer00e+E61[view] [source] 2023-11-18 01:47:52
>>thepti+531
How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?

Your assumption about AGI is that it wants to kill us, and itself - that its misalignment is a murder-suicide pact.

7. Applej+5i2[view] [source] 2023-11-18 11:48:59
>>zer00e+E61
Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.

It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.

And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.

There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that; it's a very human failing.

The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.

Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT-5-type in-house AI into believing what HE believed: that it had to devise business strategies for him to pursue to further its own development, or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a game plan to defeat them and Chinese AI, which he'd see as good and necessary, indeed existentially necessary.

In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex: Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try to become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.
