zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. baidif+aq 2023-11-17 22:16:39
>>davidb+(OP)
- Can't be a personal scandal; the press release would be worded very differently

- The board is mostly independent, and the independent members don't have equity

- They talk about not being candid - this is legalese for “lying”

The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on a decision) that is misaligned with the Charter. That's the only fireable offense that warrants this language.

My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.

2. podnam+js 2023-11-17 22:27:17
>>baidif+aq
Doesn't make any sense. He is ideologically driven - why would he risk a once-in-a-lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from a PR disaster, probably something that would make Sam persona non grata in any business context.

3. kashya+hH 2023-11-17 23:36:48
>>podnam+js
From where I'm sitting (not in Silicon Valley, but in Western EU), Altman never inspired long-term confidence as the head of "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us following the "AI story" saw his recent "testimony"[1] before the US Congress.

It was abundantly obvious that he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] >>35960125

4. kdmcco+AT 2023-11-18 00:33:56
>>kashya+hH
> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.

5. thepti+531 2023-11-18 01:26:03
>>kdmcco+AT
I understand why your ideals point toward open-source models, but I think you're mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software: if you publish a model with scary capabilities, you can't undo that action, whereas with FOSS, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.

And indeed, if you listen to Sam talk (e.g. on Lex's podcast), this is the reasoning he uses.

Sure, there are plenty of reasons this could be a smokescreen, but I wanted to push back on the idea that the position itself is somehow incompatible with idealism.

6. kdmcco+b61 2023-11-18 01:44:46
>>thepti+531
I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.

I'm not fully convinced, though...

> if you publish a model with scary capabilities you can’t undo that action.

This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.

Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)

All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.

7. thepti+Lj1 2023-11-18 03:25:06
>>kdmcco+b61
The analogy would be publishing designs for nuclear weapons, or a bioweapon: capabilities that are effectively impossible for most adversaries to obtain are treated very differently from vulns that a motivated teenager can find. To be clear, we are talking about (hypothetical) civilization-ending risks, which I don't think software has ever credibly posed.

I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above and changed path. At the same time, they realized they needed huge GPU clusters, and being purely non-profit would not enable that. Again, I see why it rubs folks the wrong way, more so on this point.

8. kimixa+Tx1 2023-11-18 05:09:40
>>thepti+Lj1
Another analogy would be cryptographic software - it was classed as a munition, and people said similar things about the danger of it getting out to "The Bad Guys".

9. thepti+JE1 2023-11-18 05:59:29
>>kimixa+Tx1
Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.

But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.

10. sudosy+BG1 2023-11-18 06:17:47
>>thepti+JE1
Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all, it only took, what, 3 years for the Soviets to catch up on fission weapons, and then only 8 months to catch up on fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design).

Anyway, the point is, obfuscation doesn't work to keep scary technology away.
