zlacker

[parent] [thread] 12 comments
1. nopins+(OP)[view] [source] 2023-11-22 08:00:12
Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond what they currently have.

This paper explores one such danger, and there are other papers showing it's possible to use LLMs to aid in designing new toxins and biological weapons.

The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

How do you propose we deal with this sort of harm if more powerful AIs, with no limits and no controls, proliferate in the wild?


Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: >>38376263

replies(3): >>kvgr+J1 >>nickpp+V4 >>miracu+LT5
2. kvgr+J1[view] [source] 2023-11-22 08:14:22
>>nopins+(OP)
If somebody wanted to do a biological attack, there is probably not much stopping them even now.
replies(1): >>nopins+w2
3. nopins+w2[view] [source] [discussion] 2023-11-22 08:19:43
>>kvgr+J1
The expertise to produce the substance itself is quite rare so it's hard to carry it out unnoticed. AI could make it much easier to develop it in one's basement.
replies(2): >>swells+a4 >>DebtDe+ds
4. swells+a4[view] [source] [discussion] 2023-11-22 08:33:16
>>nopins+w2
Huh, you'd think all you need are some books on the subject and some fairly generic lab equipment. Not sure what a neural net trained on Internet dumps can add to that? The information has to be in the training data for the AI to be aware of it, correct?
replies(1): >>nopins+55
5. nickpp+V4[view] [source] 2023-11-22 08:37:52
>>nopins+(OP)
> Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond what they currently have.

Don't forget that it would also increase the power of the good guys. Every technology in history (starting with fire) has had good and bad uses, but overall the good has outweighed the bad in every case.

And considering that our default fate is extinction (by the Sun's death, if by no other means), we need all the good we can get to avoid that.

replies(1): >>nopins+ua
6. nopins+55[view] [source] [discussion] 2023-11-22 08:39:31
>>swells+a4
GPT-4 is likely trained on some data not publicly available as well.

There's also a distinction between trying to follow broad textbook instructions and getting detailed, real-time feedback from an advanced conversational AI with vision and more knowledge than a few textbooks/articles contain.

7. nopins+ua[view] [source] [discussion] 2023-11-22 09:26:32
>>nickpp+V4
> Don't forget that it would also increase the power of the good guys.

In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

> And considering that our default fate is extinction (by the Sun's death, if by no other means), we need all the good we can get to avoid that.

"In the long run we are all dead" -- Keynes. But AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton has said as much), and we'd rather not be dead too soon.

replies(2): >>fallin+qr >>nickpp+331
8. fallin+qr[view] [source] [discussion] 2023-11-22 11:55:26
>>nopins+ua
> In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

Is it? The hypothetical technology that allows someone to create and execute a bioweapon must embody an understanding of molecular machinery that can also be used to create a treatment.

replies(1): >>Number+tc3
9. DebtDe+ds[view] [source] [discussion] 2023-11-22 11:59:56
>>nopins+w2
The Tokyo Subway attack you referenced above happened in 1995 and didn't require AI. The information required can be found on the internet or in college textbooks. I suppose an "AI" in the sense of a chatbot can make it easier by summarizing these sources, but no one sufficiently motivated (and evil) would need that technology to do it.
10. nickpp+331[view] [source] [discussion] 2023-11-22 15:16:19
>>nopins+ua
Doomerism has been quite common throughout mankind’s history, but the dire predictions invariably failed, from the “population bomb” to “grey goo” and “igniting the atmosphere” with a nuke. Populists, however, were always quite eager to “protect us” - if only we’d give them the power.

But in reality you can’t protect against all possible dangers and, worse, fear-mongering usually ends up doing more harm than good, as when it stopped our switch to nuclear power and kept us burning hydrocarbons, thus bringing about Climate Change, another civilization-ending danger.

Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.

replies(1): >>thedud+De4
11. Number+tc3[view] [source] [discussion] 2023-11-23 02:49:17
>>fallin+qr
I would say...not necessarily. The technology that lets someone create a gun does not give the ability to make bulletproof armor or the ability to treat life-threatening gunshot wounds. Or take nerve gases, as another example. It's entirely possible that we can learn how to make horrible pathogens without an equivalent means of curing them.

Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.

12. thedud+De4[view] [source] [discussion] 2023-11-23 13:39:27
>>nickpp+331
Doomerism at the societal level that overrides individual freedoms definitely occurs: COVID lockdowns, the takeover of private businesses to fund/supply the world wars, government mandates around "man-made" climate change.
13. miracu+LT5[view] [source] 2023-11-23 23:36:15
>>nopins+(OP)
Such attacks cannot be stopped by outlawing technology.