zlacker

[parent] [thread] 24 comments
1. tracer+(OP)[view] [source] 2024-03-02 01:11:15
It says "will seek to open source technology for the public benefit when applicable" they have open sourced a number of things, Whisper most notably. Nothing about that is a promise to open source everything and they just need to say it wasn't applicable for ChatGPT or DallE because of safety.
replies(3): >>stubis+K3 >>thayne+r7 >>canjob+2j
2. stubis+K3[view] [source] 2024-03-02 01:53:19
>>tracer+(OP)
I doubt the safety argument will hold up in court. Anything safe enough to allow Microsoft or others access to would be safe enough to release publicly. Our AI overlords are not going to respect an NDA. And for the public safety/disinformation side of things, I think it is safe to say that cat is out of the bag and chasing the horse that has bolted.
replies(4): >>sanxiy+i5 >>bandya+da >>tintor+Sd >>WhatIs+KY
◧◩
3. sanxiy+i5[view] [source] [discussion] 2024-03-02 02:11:17
>>stubis+K3
I am unsure. You can't (for example) fine-tune over the API. Is anything safe for Microsoft to fine-tune really safe for Russia, the CCP, etc. to fine-tune? Open-weight (which I think is a more accurate term than open source here) models enable both many more actors and many more actions than the status quo.
replies(1): >>pclmul+hc
4. thayne+r7[view] [source] 2024-03-02 02:39:36
>>tracer+(OP)
I think that position would be a lot more defensible if they weren't giving another for-profit company access to it. And there is definitely a conflict of interest when not revealing the source gives them a competitive advantage in selling their product. There's also the question: if the source is too dangerous to make public, how can they be sure the final product is safe? An argument could be made that it isn't.
replies(1): >>thepti+qa
◧◩
5. bandya+da[view] [source] [discussion] 2024-03-02 03:15:18
>>stubis+K3
If the above statement is the only “commitment” they’ve made to open-source, then that argument won’t need to be made in court. They just need to reference the vague language that basically leaves the door open to do anything they want.
◧◩
6. thepti+qa[view] [source] [discussion] 2024-03-02 03:18:49
>>thayne+r7
It’s easy to defend this position.

It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.

If you open-weight the model, if dangerous capabilities are later discovered there is no way to put the genie back in the bottle; the weights are out there, anyone can use them.

This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).

replies(2): >>Walter+wl >>isaacf+DN
◧◩◪
7. pclmul+hc[view] [source] [discussion] 2024-03-02 03:44:47
>>sanxiy+i5
You can fine tune over the API. Also, Russia and the CCP likely have the model weights. They probably have spies in OpenAI or Microsoft with access to the weights.
replies(3): >>Wander+Yi >>HeavyS+zt >>petre+S61
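
For concreteness, a minimal sketch of what "fine tune over the API" looks like with the openai Python SDK (v1.x); the training_examples.jsonl filename is hypothetical, and hosted fine-tuning is only offered for the models OpenAI chooses to expose.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload a JSONL file of chat-formatted training examples (hypothetical filename).
    training_file = client.files.create(
        file=open("training_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a hosted fine-tuning job; the weights stay on OpenAI's side and only
    # a model ID for the tuned model comes back, usable via the same API.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)

Note this is hosted fine-tuning: the tuned weights never leave OpenAI, which is the distinction >>sanxiy+i5 is drawing with "open weight."
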
◧◩
8. tintor+Sd[view] [source] [discussion] 2024-03-02 04:07:40
>>stubis+K3
`Anything safe enough to allow Microsoft or others access to would be safe enough to release publicly.`

This makes absolutely no sense.

replies(1): >>rl3+Bg
◧◩◪
9. rl3+Bg[view] [source] [discussion] 2024-03-02 04:42:51
>>tintor+Sd
>This makes absolutely no sense.

>>34716375

What about now?

replies(1): >>cutemo+vI1
◧◩◪◨
10. Wander+Yi[view] [source] [discussion] 2024-03-02 05:11:59
>>pclmul+hc
Interesting thought experiment! How would they best take advantage of the weights, and what signs or actions could we observe that would signal they likely have the weights?
replies(1): >>simfre+jY1
11. canjob+2j[view] [source] 2024-03-02 05:13:46
>>tracer+(OP)
The "when applicable" gets them out of nearly anything.
◧◩◪
12. Walter+wl[view] [source] [discussion] 2024-03-02 05:53:04
>>thepti+qa
This was all obvious >before< they wrote the charter.
replies(1): >>thepti+gE1
◧◩◪◨
13. HeavyS+zt[view] [source] [discussion] 2024-03-02 07:29:52
>>pclmul+hc
I don't think such speculation would _hold up in court_.
replies(1): >>pclmul+Fa1
◧◩◪
14. isaacf+DN[view] [source] [discussion] 2024-03-02 11:55:05
>>thepti+qa
Open source would also mean it is available to sanctioned countries like China.
◧◩
15. WhatIs+KY[view] [source] [discussion] 2024-03-02 14:06:07
>>stubis+K3
https://arxiv.org/abs/2311.03348

This seems to make a decent argument that these models are potentially not safe. I'd prefer that criminals not have access to PhD-level bomb-making assistants who can explain the process to them like they're 12. While the cat may be out of the bag, you don't just hand out guns to everyone (for free) because a few people misused them.

◧◩◪◨
16. petre+S61[view] [source] [discussion] 2024-03-02 15:23:48
>>pclmul+hc
They'll train it on Xi Jinping Thought so that the people of China can move on with their lives and use the Xi bot instead of wasting precious man hours actually studying the texts.

The Russians will obviously use it to spread the Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.

◧◩◪◨⬒
17. pclmul+Fa1[view] [source] [discussion] 2024-03-02 15:56:17
>>HeavyS+zt
A quick Google search shows that Microsoft has confirmed at least the Russia part:

https://www.cyberark.com/resources/blog/apt29s-attack-on-mic...

It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them are stealing those materials. Google has enough leakers to conservative media of all places that you should suspect that at least one Googler is exfiltrating data to China, Russia, or India.

◧◩◪◨
18. thepti+gE1[view] [source] [discussion] 2024-03-02 19:41:01
>>Walter+wl
I don’t think this belief was widespread at all at that time.

Indeed, it’s not widespread even now, lots of folks round here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.

(In general I think if you are more worried about a baby machine god owned and aligned by Meta than complete annihilation from unaligned ASI then you’ll prefer open weights no matter the theoretical risk.)

replies(1): >>Walter+sH7
◧◩◪◨
19. cutemo+vI1[view] [source] [discussion] 2024-03-02 20:22:21
>>rl3+Bg
Microsoft doesn't run troll farms trying to manipulate voters into turning the US into a dictatorship, develop killer drone swarms, or have nukes.

(Not saying OpenAI isn't greedy)

replies(2): >>salawa+YM1 >>rl3+q62
◧◩◪◨⬒
20. salawa+YM1[view] [source] [discussion] 2024-03-02 21:02:09
>>cutemo+vI1
...What OS do you think many of these places use? Linux is still niche af. In a real, tangible way, it may very well be the case that yes, Microsoft does, in fact, run them.
replies(1): >>Dylan1+ai2
◧◩◪◨⬒
21. simfre+jY1[view] [source] [discussion] 2024-03-02 22:42:05
>>Wander+Yi
We know Microsoft experienced a full breach of Office 365/Microsoft 365 and Azure infrastructure by a nation state actor: https://www.imprivata.com/blog/strengthening-security-5-less...
◧◩◪◨⬒
22. rl3+q62[view] [source] [discussion] 2024-03-02 23:49:40
>>cutemo+vI1
I think you make a good point. My argument was that Microsoft's security isn't that great, therefore the risk of the model ending up in the hands of the bad actors you mention isn't sufficiently low.
replies(1): >>cutemo+bJ2
◧◩◪◨⬒⬓
23. Dylan1+ai2[view] [source] [discussion] 2024-03-03 01:50:23
>>salawa+YM1
The "..." is not warranted because that is clearly not the sense of "run" they were talking about.
◧◩◪◨⬒⬓
24. cutemo+bJ2[view] [source] [discussion] 2024-03-03 08:12:43
>>rl3+q62
Aha, ok thanks for explaining
◧◩◪◨⬒
25. Walter+sH7[view] [source] [discussion] 2024-03-05 01:41:36
>>thepti+gE1
IMHE, it's been part of widespread discussions in the AI research and AI safety communities since the 2000s.