zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. HarHar+vu1 2024-03-01 19:23:01
>>modele+(OP)
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source serves the public good, although one could certainly argue that accelerating bad actors' efforts to catch up would not be a positive.

2. nicce+1w1 2024-03-01 19:30:18
>>HarHar+vu1
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not really. Being closed slows things down the way security through obscurity does. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone who does the same work in a closed manner has a better chance of gaining an advantage when misusing it.

3. patcon+Fx1 2024-03-01 19:39:31
>>nicce+1w1
When I try to port your logic over to nuclear capability, it doesn't hold up very well.

Nuclear capability is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate so far seems like positive testament to the strategy.

Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble with this, but it's not clear to me what balance of benefit and danger current AI represents. (That's not even considering the possibility of AGI, which is beyond the scope of my comment.)

4. mywitt+lI1 2024-03-01 20:42:11
>>patcon+Fx1
The difference between nuclear capability and AI capability is that you can't rent nuclear enrichment facilities by the hour, nor can you buy the components to build such facilities at a local store. But you can train AI models by renting AWS servers or building your own.

If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.

5. TeMPOr+0K1 2024-03-01 20:52:11
>>mywitt+lI1
AI isn't like nuclear weapons. AI is like bioweapons. The easier it is for anyone to play with highly potent pathogens, the more likely it is someone will accidentally end the world. With nukes, you need people on opposite sides to escalate from first detection to full-blown nuclear exchange; there's always a chance someone decides to not follow through with MAD. With bioweapons, it only takes one, and then there's no way to stop it.

Transparency doesn't serve us here.

6. nicce+oL1 2024-03-01 21:00:37
>>TeMPOr+0K1
I would argue that AI isn't like bioweapons either.

Bioweapons do not have the kind of dual-use beneficial purpose that AI does. As a result, AI development will continue regardless; it can give a competitive advantage in any field.

Bioweapons are not exactly secret either. Most of the methods to develop such things are open science. The restricting factor is that you potentially kill your own people as well, and the use case is really just a weapon for some madman, with no other benefits.

Edit: To add, the science behind "bioweapons" (i.e., genetic modification of viruses and bacteria) is public exactly so that we can prevent the next pandemic.

7. TeMPOr+DX1 2024-03-01 22:23:37
>>nicce+oL1
I elaborated on this in a reply to the comment parallel to yours, but: by "bioweapons" I really meant "the science behind bioweapons", which happens to be just biotech. Biotech, like any applied field, is inherently dual-use. But unlike nuclear weapons, the techniques and tools scale down and, over time, become accessible to individuals.

The riskiest parts of biotech, the ones directly related to bioweapons, are not made publicly accessible. But this is hard to do, because unlike with nukes, biotech is dual-use to the very end, so we have to balance prevention and defense against the ease of creating deadly pathogens.
