zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. HarHar+vu1[view] [source] 2024-03-01 19:23:01
>>modele+(OP)
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source is in the public good, although one could certainly argue that accelerating the efforts of bad actors to catch up would not be a positive.

2. nicce+1w1[view] [source] 2024-03-01 19:30:18
>>HarHar+vu1
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not really. It slows things down the way security through obscurity does. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone doing the same work behind closed doors has a better chance of gaining an advantage when misusing it.

3. patcon+Fx1[view] [source] 2024-03-01 19:39:31
>>nicce+1w1
When I try to port your logic over into nuclear capacity it doesn't hold very well.

Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate seems a positive testament to the strategy.

Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble with this, but it's not clear to me what the balance of benefit and danger is for current AI. (I'm not even considering the possibility of AGI, which is beyond the scope of my comment.)

4. mywitt+lI1[view] [source] 2024-03-01 20:42:11
>>patcon+Fx1
The difference between nuclear capability and AI capability is that you can't rent nuclear enrichment facilities by the hour, nor can you buy the components to build such facilities at a local store. But you can train AI models by renting AWS servers or building your own.

If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.

5. TeMPOr+0K1[view] [source] 2024-03-01 20:52:11
>>mywitt+lI1
AI isn't like nuclear weapons. AI is like bioweapons. The easier it is for anyone to play with highly potent pathogens, the more likely it is someone will accidentally end the world. With nukes, you need people on opposite sides to escalate from first detection to full-blown nuclear exchange; there's always a chance someone decides to not follow through with MAD. With bioweapons, it only takes one, and then there's no way to stop it.

Transparency doesn't serve us here.

6. serf+0P1[view] [source] 2024-03-01 21:25:10
>>TeMPOr+0K1
it's the weirdest thing to compare nuclear weapons and biological catastrophe to tools that people around the world right now are using towards personal/professional/capitalistic benefit.

A bioweapon is the thing itself; AI is a tool for making things. That's the most important distinction here. Bioweapon research didn't also serendipitously produce powerful tools for generating images/sounds/text/ideas/plans, so there isn't much reason to compare the benefits of the two.

These arguments aren't the same as "Let's ban the personal creation of terrifying weaponry"; they're the same as "Let's ban wrenches and hack-saws because years down the line they could be used to facilitate the creation of terrifying weaponry". The problem with that argument is that it ignores the boons such tools will allow for humanity.

Wrenches and hammers would have been banned too had they been framed as weapons of bludgeoning and torture by those who first encountered them. Thankfully, people saw the benefits they offered instead.

7. TeMPOr+vW1[view] [source] 2024-03-01 22:16:22
>>serf+0P1
Okay, I made the mistake of using a shorthand; I won't do that in the future. The shorthand was saying "nuclear weapons" and "bioweapons" when I meant "technology that makes it easy to create WMDs".

Consider nuclear nonproliferation. It doesn't only affect weapons - it also affects nuclear power generation, nuclear physics research and even medicine. There are various degrees of secrecy around research and technologies that affect "tools that people around the world right now are using towards personal/professional/capitalistic benefit". Why? Because the same knowledge makes military and terrorist applications easier, reducing the barrier to entry.

Consider, then, biotech - particularly synthetic biology and genetic engineering. All that knowledge is dual-use, and unlike with nuclear weapons, biotech seems to scale down well. As a result, we have both a growing industry and research field, and kids playing with those same techniques at school and at home. Biohackerspaces were already a thing over a decade ago (I would know - I tried to start one in my city circa 2013). There's a reason all these developments have been accompanied by a certain unease and fear. Today, an unlucky biohacker may give themselves diarrhea or cancer; in ten years, they may accidentally end the world. Unlike with nuclear weapons, there's no natural barrier to scaling this capability down to the individual level.

And of course, between the diarrhea and the humanity-ending "hold my beer and watch this" gain-of-function research, there's a whole range of smaller harms, like getting a community sick or destroying a local ecosystem. And I'm only talking about accidents with peaceful/civilian work here, ignoring deliberate weaponization.

To get a taste of what I'm talking about: if you buy into the lab leak hypothesis for COVID-19, then this is what a random fuckup at a random BSL-4 lab looks like, when we are lucky and get off easy. That is why biotech is another item on the x-risks list.

Back to the point: the AI x-risk is fundamentally more similar to biotech x-risk than nuclear x-risk, because the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released. The threat dynamics are similar to a highly-virulent pathogen, and not to a nuclear exchange between nation states - hence the comparison I've made in the original comment.

8. casual+By2[view] [source] 2024-03-02 04:30:16
>>TeMPOr+vW1
> the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released.

I also worry every time I drop a hammer from my waist that it could bounce and kill everyone I love. Really, anyone on the planet could drop a hammer which bounces and kills everyone I love. That is why hammers are an 'x-risk'.

9. TeMPOr+F65[view] [source] 2024-03-03 09:18:11
>>casual+By2
Ha ha. A more realistic worry is that you could sneeze and kill everyone you love with whatever gave you the runny nose.

Which is why you take your course of antibiotics to the end, because superbugs are a thing.
