zlacker

[parent] [thread] 19 comments
1. patcon+(OP)[view] [source] 2024-03-01 19:39:31
When I try to port your logic over into nuclear capacity it doesn't hold very well.

Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate seems a positive testament to the strategy.

Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble with this, but it's not clear to me what the balance of benefit and danger is for current AI. (Not even considering the possibility of AGI, which is beyond the scope of my comment.)

replies(7): >>Vetch+e9 >>mywitt+Ga >>codetr+he >>freedo+Qg >>tibann+nl >>a_wild+Ku >>andoan+Vw
2. Vetch+e9[view] [source] 2024-03-01 20:33:14
>>patcon+(OP)
This is a poor analogy, a better one would be nuclear physics. An expert in nuclear physics can develop positively impactful energy generation methods or very damaging nuclear weapons.

It's not because of arcane secrets that so few nations have nuclear weapons; all you need is a budget, time, and brilliant physicists and engineers. The reason we don't have more is largely down to surveillance, economics, the challenge of reliable payload delivery, security assurances, agreements, and various logistical challenges.

Most countries are open and transparent about their nuclear efforts due to the diplomatic advantages. There are also methods to trace and detect secret nuclear tests, and critical supply chains can be monitored. Countries that violate these norms can face anything from heavy economic sanctions and isolation to sabotage of research efforts. On the technical side, having safe and reliable launch capability is arguably as much of a challenge as the bomb itself, if not more. Logistical issues include mass manufacture (merely having capacity only paints a target on your back with no real gains) and safe storage. There are a great many reasons why it is simply not worth going forward with nuclear weapons. This calculus changes, however, if a country fears for its continued existence, as is presently the case for some Eastern European countries.

3. mywitt+Ga[view] [source] 2024-03-01 20:42:11
>>patcon+(OP)
The difference between nuclear capability and AI capability is that you can't just rent out nuclear enrichment facilities on a per-hour basis, nor can you buy the components to build such facilities at a local store. But you can train AI models by renting AWS servers or building your own.

If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.

replies(1): >>TeMPOr+lc
4. TeMPOr+lc[view] [source] [discussion] 2024-03-01 20:52:11
>>mywitt+Ga
AI isn't like nuclear weapons. AI is like bioweapons. The easier it is for anyone to play with highly potent pathogens, the more likely it is someone will accidentally end the world. With nukes, you need people on opposite sides to escalate from first detection to full-blown nuclear exchange; there's always a chance someone decides to not follow through with MAD. With bioweapons, it only takes one, and then there's no way to stop it.

Transparency doesn't serve us here.

replies(2): >>nicce+Jd >>serf+lh
5. nicce+Jd[view] [source] [discussion] 2024-03-01 21:00:37
>>TeMPOr+lc
I would argue that AI isn't like bioweapons either.

Bioweapons do not have the same dual-use beneficial purpose that AI does. As a result, AI development will continue regardless; it can give a competitive advantage in any field.

Bioweapons are not exactly secret either. Most of the methods to develop such things are open science. The restricting factor is that you potentially kill your own people as well, and the use case is really just a weapon for some madman, without other benefits.

Edit: To add, the science behind "bioweapons" (or genetic modification of viruses/bacteria) is public precisely so that we can prevent the next pandemic.

replies(1): >>TeMPOr+Yp
6. codetr+he[view] [source] 2024-03-01 21:04:17
>>patcon+(OP)
So in other words, one day we will see a state actor make something akin to Stuxnet again, but this time, instead of targeting the SCADA systems of a specific power plant in Iran, it will target the GPU farm of some country they suspect of secretly working on AGI.
7. freedo+Qg[view] [source] 2024-03-01 21:21:56
>>patcon+(OP)
The lack of nukes isn't because of restriction of information. That lasted about as long as it took to leak the info to the Soviets. It's far more complicated than that.

The US (and other nations) is not too friendly toward countries developing nukes. There are significant threats against them.

Also, perspective is an interesting thing. Non-nuclear countries like Iran and (in the past) North Korea that get pushed around by Western governments probably wouldn't agree that restriction is for the best. They would probably explain how nukes and the threat of destruction/MAD make people a lot more understanding, respectful, and restrained. Consider how Russia has been handled the past few years, compared to, say, Iraq.

(To be clear, I'm not saying we should YOLO with nukes and other weapons information/technology. I'm just saying I think it's a lot more complicated an issue than it at first seems, and in the end it kind of comes down to who has the power and who does not, and the people without the power probably won't like it.)

replies(2): >>14u2c+5m >>maximu+UT1
8. serf+lh[view] [source] [discussion] 2024-03-01 21:25:10
>>TeMPOr+lc
it's the weirdest thing to compare nuclear weapons and biological catastrophe to tools that people around the world right now are using towards personal/professional/capitalistic benefit.

Bioweapons are the thing; AI is a tool to make things. That's exactly the most powerful distinction here. Bioweapon research didn't also serendipitously make available powerful tools for the generation of images/sounds/text/ideas/plans, so there isn't much reason to compare the benefits of the two.

These arguments aren't the same as "Let's ban the personal creation of terrifying weaponry"; they're the same as "Let's ban wrenches and hack-saws because they could be used, years down the line, to facilitate the creation of terrifying weaponry." The problem with this argument is that it ignores the boons such tools will allow for humanity.

Wrenches and hammers would have been banned too had they been framed as weapons of bludgeoning and torture by those that first encountered them. Thankfully people saw the benefits offered otherwise.

replies(2): >>TeMPOr+Qo >>patcon+691
9. tibann+nl[view] [source] 2024-03-01 21:52:05
>>patcon+(OP)
If my grandmother had wheels, she would have been a bike.
10. 14u2c+5m[view] [source] [discussion] 2024-03-01 21:57:13
>>freedo+Qg
This is absolutely correct. It goes beyond just the US, too. In my estimation, non-proliferation is a core objective of the UN Security Council.
replies(1): >>sudosy+WY
11. TeMPOr+Qo[view] [source] [discussion] 2024-03-01 22:16:22
>>serf+lh
Okay, I made the mistake of using shorthand. I won't do that in the future. The shorthand was saying "nuclear weapons" and "bioweapons" when I meant "technology making it easy to create WMDs".

Consider nuclear nonproliferation. It doesn't only affect weapons - it also affects nuclear power generation, nuclear physics research and even medicine. There are various degrees of secrecy around research and technologies that affect "tools that people around the world right now are using towards personal/professional/capitalistic benefit". Why? Because the same knowledge makes military and terrorist applications easier, reducing the barrier to entry.

Consider, then, biotech, particularly synthetic biology and genetic engineering. All that knowledge is dual-use, and unlike with nuclear weapons, biotech seems to scale down well. As a result, we have both a growing industry and research field, and kids playing with those same techniques at school and at home. Biohackerspaces were already a thing over a decade ago (I would know, I tried to start one in my city circa 2013). There's a reason all those developments have been accompanied by a certain unease and fear. Today, an unlucky biohacker may give themselves diarrhea or cancer; in ten years, they may accidentally end the world. Unlike with nuclear weapons, there's no natural barrier to scaling this capability down to the individual level.

And of course, between the diarrhea and the humanity-ending "hold my beer and watch this" gain-of-function research, there's a whole range of smaller things, like getting a community sick or destroying a local ecosystem. And I'm only talking about accidents with peaceful/civilian work here, ignoring deliberate weaponization.

To get a taste of what I'm talking about: if you buy into the lab leak hypothesis for COVID-19, then this is what a random fuckup at a random BSL-4 lab looks like, when we are lucky and get off easy. That is why biotech is another item on the x-risks list.

Back to the point: the AI x-risk is fundamentally more similar to biotech x-risk than nuclear x-risk, because the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released. The threat dynamics are similar to a highly-virulent pathogen, and not to a nuclear exchange between nation states - hence the comparison I've made in the original comment.

replies(1): >>casual+W01
12. TeMPOr+Yp[view] [source] [discussion] 2024-03-01 22:23:37
>>nicce+Jd
I elaborated on this in a reply to the comment parallel to yours, but: by "bioweapons" I really meant "science behind bioweapons", which happens to be just biotech. Biotech is, like any applied field, inherently dual-use. But unlike nuclear weapons, the techniques and tools scale down and, over time, become accessible to individuals.

The most risky parts of biotech, the ones directly related to bioweapons, are not made publicly accessible - but that's hard to maintain, as biotech, unlike nukes, is dual-use to the very end, so we have to balance prevention and defense against the ease of creating deadly pathogens.

13. a_wild+Ku[view] [source] 2024-03-01 22:55:23
>>patcon+(OP)
We've failed to self-annihilate because of nuclear proliferation, i.e. MAD. So your conclusion is backward.

But that's irrelevant anyway, because nukes are a terrible analogy. If you insist on sci-fi speculation, use an analogy that's at least remotely similar -- perhaps compare the development of AI vs. traditional medicine. They're both very general technologies with incredible benefits and important dangers (e.g. superbugs).

replies(1): >>TeMPOr+jK
14. andoan+Vw[view] [source] 2024-03-01 23:12:11
>>patcon+(OP)
Why would the logic hold well to begin with when using an analogy?
15. TeMPOr+jK[view] [source] [discussion] 2024-03-02 00:55:22
>>a_wild+Ku
If you insist on a sci-fi analogy, then try the protomolecule from The Expanse. Or a runaway grey goo scenario triggered by a biotech or nanotech accident.

Artificial general intelligence is not a stick you can wield and threaten other countries with. It's a process, complex beyond our understanding.

16. sudosy+WY[view] [source] [discussion] 2024-03-02 03:58:24
>>14u2c+5m
Every single member of the UNSC has facilitated nuclear proliferation at some point. Literally every single one, without exception. It's not really a core objective.
17. casual+W01[view] [source] [discussion] 2024-03-02 04:30:16
>>TeMPOr+Qo
> the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released.

I also worry every time I drop a hammer from my waist that it could bounce and kill everyone I love. Really, anyone on the planet could drop a hammer which bounces and kills everyone I love. That is why hammers are an 'x-risk'.

replies(1): >>TeMPOr+0z3
18. patcon+691[view] [source] [discussion] 2024-03-02 06:14:35
>>serf+lh
> it's the weirdest thing to compare nuclear weapons and biological catastrophe to tools that people around the world right now are using towards personal/professional/capitalistic benefit.

You're literally painting a perfect analogy for biotech/nuclear/AI. Catastrophe and culture-shifting benefits go hand in hand with all of them. It's about figuring out where the lines are. But claiming there is minimal or negligible risk ("so let's just run with it" as some say, maybe not you) feels very cavalier to me.

But you're not alone, if you feel that way. I feel like I'm taking crazy pills with how the software dev field talks about sharing AI openly.

And I've literally been an open-culture advocate for over a decade, and have helped hundreds of people start open community projects. If there's anyone who'd be excited about open collaboration, it's me! :)

19. maximu+UT1[view] [source] [discussion] 2024-03-02 15:35:38
>>freedo+Qg
Yeah, I doubt Russia would be attacking Ukraine like this if Ukraine hadn't given up all its nukes.
20. TeMPOr+0z3[view] [source] [discussion] 2024-03-03 09:18:11
>>casual+W01
Ha ha. A more realistic worry is that you could sneeze and kill everyone you love with whatever gave you that runny nose.

Which is why you take your course of antibiotics to the end, because superbugs are a thing.
