zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. HarHar+vu1[view] [source] 2024-03-01 19:23:01
>>modele+(OP)
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source is in the public good, although one could certainly argue that accelerating the efforts of bad actors to catch up would not be a positive.

2. nicce+1w1[view] [source] 2024-03-01 19:30:18
>>HarHar+vu1
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not really. That works about as well as security through obscurity. The tech needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone doing the same work in a closed manner has a better chance of gaining an advantage by misusing it.

3. patcon+Fx1[view] [source] 2024-03-01 19:39:31
>>nicce+1w1
When I try to port your logic over into nuclear capacity it doesn't hold very well.

Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate so far seems a positive testament to the strategy.

Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble with this, but it's not clear to me what balance of benefit and danger current AI presents. (Not even considering the possibility of AGI, which is beyond the scope of my comment.)

4. codetr+WL1[view] [source] 2024-03-01 21:04:17
>>patcon+Fx1
So in other words, one day we will see a state actor make something akin to Stuxnet again but this time instead of targeting the SCADA systems of a specific power plant in Iran, they will make one that targets the GPU farm of some country they suspect of secretly working on AGI.