zlacker

Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]
1. troupe+Kd1 2024-03-01 18:04:16
>>modele+(OP)
If OpenAI became a non-profit with this in its charter:

"resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person"

I don't think it is going to be hard to show that they are doing something very different from what they said they were going to do.

2. stubis+qi2 2024-03-02 01:00:33
>>troupe+Kd1
So much of the discussion here is about being a non-profit, but per your quote I think the key is open source. Here we have people investing in an open source company, and the company never opened their source. Rather than open source technology everyone could profit from, they kept everything closed and sold exclusive access. I think it is going to be hard for OpenAI to defend their behavior, and there is a huge amount of damages to be claimed for all the money investors had to spend catching up.
3. tracer+ij2 2024-03-02 01:11:15
>>stubis+qi2
It says "will seek to open source technology for the public benefit when applicable." They have open-sourced a number of things, Whisper most notably. Nothing about that is a promise to open source everything, and they just need to say it wasn't applicable for ChatGPT or DALL-E because of safety.
4. thayne+Jq2 2024-03-02 02:39:36
>>tracer+ij2
I think that position would be a lot more defensible if they weren't giving another for-profit company access to it. And there is definitely a conflict of interest when not revealing the source gives them a competitive advantage in selling their product. There's also the question: if the source is too dangerous to make public, how can they be sure the final product is safe? An argument could be made that it isn't.
5. thepti+It2 2024-03-02 03:18:49
>>thayne+Jq2
It’s easy to defend this position.

It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.

If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle: the weights are out there, and anyone can use them.

This of course applies to both mundane harms (e.g. generating deepfake porn of famous people) and existential risks (e.g. power-seeking behavior).

6. Walter+OE2 2024-03-02 05:53:04
>>thepti+It2
This was all obvious >before< they wrote the charter.
7. thepti+yX3 2024-03-02 19:41:01
>>Walter+OE2
I don’t think this belief was widespread at all at that time.

Indeed, it’s not widespread even now; lots of folks around here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.

(In general, I think if you are more worried about a baby machine god owned and aligned by Meta than about complete annihilation from unaligned ASI, then you’ll prefer open weights, no matter the theoretical risk.)
