zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. baidif+aq[view] [source] 2023-11-17 22:16:39
>>davidb+(OP)
- Can't be a personal scandal; the press release would be worded very differently

- The board is mostly independent, and the independent directors don't have equity

- They talk about not being candid - this is legalese for “lying”

The only thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on one) that is misaligned with the Charter. That's the only fireable offense that fits this language.

My bet: Sam initiated some commercial agreement (like a sale) with an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.

◧◩
2. podnam+js[view] [source] 2023-11-17 22:27:17
>>baidif+aq
Doesn’t make any sense. He is ideologically driven - why would he risk a once-in-a-lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from a PR disaster, probably something that would make Sam persona non grata in any business context.

◧◩◪
3. kashya+hH[view] [source] 2023-11-17 23:36:48
>>podnam+js
From where I'm sitting (not in Silicon Valley, but in Western Europe), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us following the "AI story" saw his recent "testimony"[1] before the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net long-term loss for society.

[1] >>35960125

◧◩◪◨
4. kdmcco+AT[view] [source] 2023-11-18 00:33:56
>>kashya+hH
> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.

◧◩◪◨⬒
5. thepti+531[view] [source] 2023-11-18 01:26:03
>>kdmcco+AT
I understand why open-source models appeal to your ideals, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software in this respect: if you publish a model with scary capabilities, you can’t undo that action. With FOSS, by contrast, more eyes mean more bugs found, and everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and it later turns out that a certain prompt structure unlocks capability gains toward misaligned AGI, you can’t put that genie back in the bottle.

And indeed, if you listen to Sam talk (e.g. on Lex Fridman’s podcast), this is the reasoning he uses.

Sure, there are plenty of reasons this could be a smokescreen, but I wanted to push back on the idea that the position itself is somehow incompatible with idealism.

◧◩◪◨⬒⬓
6. zer00e+E61[view] [source] 2023-11-18 01:47:52
>>thepti+531
How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does that infrastructure get its fuel? An AGI runs on physical systems that humans keep alive every day.

Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder-suicide pact.

◧◩◪◨⬒⬓⬔
7. Me1000+0f1[view] [source] 2023-11-18 02:52:40
>>zer00e+E61
This gets way too philosophical way too fast. The AI doesn’t have to want anything. It just has to do something different from what you tell it to do. If you put an AI in control of something like the water flow from a dam and it does something wrong, it could be catastrophic. There doesn’t have to be intent.

The danger exists with regular software too, but the logical, deterministic nature of traditional software makes it provable.
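
To make the "provable" point concrete, here's a minimal sketch in Python (the dam-controller rule, its thresholds, and the function name are all hypothetical, invented for illustration): a deterministic rule over a finite input space can have its safety invariant checked exhaustively before deployment, which has no counterpart for a learned policy over continuous sensor inputs.

    # Hypothetical example: a deterministic dam-gate rule whose safety
    # property can be verified over every possible input before deployment.
    def gate_opening(level_cm: int) -> int:
        """Map reservoir level (0-1000 cm) to spillway gate opening (0-100%)."""
        if level_cm < 700:
            return 0                      # normal range: gate stays closed
        if level_cm < 900:
            return (level_cm - 700) // 2  # ramp open between 700 and 900 cm
        return 100                        # flood range: fully open

    # Exhaustive check of the safety invariants; this enumeration is exactly
    # what a neural policy over continuous inputs cannot offer.
    for level in range(0, 1001):
        opening = gate_opening(level)
        assert 0 <= opening <= 100
        assert level < 900 or opening == 100  # flood level forces gate fully open
    print("all 1001 reservoir states verified")

That loop is a proof by enumeration; swap the rule for a model with float-valued sensor inputs and the best you can do is sample and hope.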

◧◩◪◨⬒⬓⬔⧯
8. zer00e+Zm1[view] [source] 2023-11-18 03:50:22
>>Me1000+0f1
So an ML/LLM system, or more likely people using ML and LLMs, does something that kills a bunch of people... Let's face facts: this is most likely going to be bad software.

Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.
