zlacker

[parent] [thread] 23 comments
1. kypro+(OP)[view] [source] 2023-05-16 12:36:29
Even if you're correct about the capabilities of LLMs (I don't think you are), there are still obvious dangers here.

I wrote a comment recently trying to explain why, even if you believe that all LLMs can (and will ever) do is regurgitate their training data, you should still be concerned.

For example, imagine in 5 years we have GPT-7, and you ask GPT-7 to solve humanity's great problems.

From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.

But its "aligned" so might understand from its training data that killing people is wrong so instead it uses its training data to seek other ways to reduce human populations without extermination.

Its training data included information about how gene drives were used by humans to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so based on the data the LLM was trained on, the ethical implications of using gene drives to cause infertility are debatable.

Using this information it decides to hack into a biolab using hacking techniques it learnt from its training data and use its biochemistry knowledge to make slight alterations to one of the active research projects at the lab. This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

---

The point here is that even if we just assume LLMs are only capable of producing output which approximates stuff it learnt from its training data, an advanced LLM can still be dangerous.

And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there who would seek to use LLMs for malicious purposes, or that the AI is not well aligned, then the risk becomes even clearer.

replies(8): >>davidg+F >>touris+81 >>supriy+u2 >>Random+Xj >>reveli+4k >>throwa+qr >>wkat42+Kv >>salmon+AH
2. davidg+F[view] [source] 2023-05-16 12:40:42
>>kypro+(OP)
Sci-fi is a hell of a drug
replies(1): >>orbita+21
3. orbita+21[view] [source] [discussion] 2023-05-16 12:42:17
>>davidg+F
Shout out to his family.
4. touris+81[view] [source] 2023-05-16 12:42:48
>>kypro+(OP)
You seem to be implying sentience from this "AI".
5. supriy+u2[view] [source] 2023-05-16 12:49:47
>>kypro+(OP)
People have been able to commit malicious acts by themselves historically, no AI needed.

In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime itself instead of regulating AI, which would only serve to make OpenAI a monopoly.

replies(2): >>shaneb+p3 >>kypro+z3
6. shaneb+p3[view] [source] [discussion] 2023-05-16 12:55:18
>>supriy+u2
Regulation is the only tool that works to minimize crime before it happens. Other mechanisms, such as police, respond to crime after the fact.
replies(1): >>helloj+rZ
7. kypro+z3[view] [source] [discussion] 2023-05-16 12:56:35
>>supriy+u2
This isn't a trick question, I'm genuinely curious – do you agree that guns are not the problem and should not be regulated? That is, while they can be used for harm, the right approach to gun violence is to police the crime.

I think the objection to this would be that currently not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.

replies(2): >>supriy+5c >>tome+5P
8. supriy+5c[view] [source] [discussion] 2023-05-16 13:40:48
>>kypro+z3
AI systems provide many benefits to society, such as image recognition, anomaly detection, and educational and programming uses of LLMs, to name a few.

Guns, by contrast, have only one primary use, which is to kill or injure someone. While that act of killing may be justified when the person violates societal values in some way, making regular citizens the decision makers over whether a certain behavior is allowed or disallowed, and letting them immediately make that judgment and execute upon it, leads to a sort of low-trust, vigilante environment; which is why the argument I made above doesn't apply to guns.

replies(2): >>logicc+xm >>menset+yP
9. Random+Xj[view] [source] 2023-05-16 14:17:55
>>kypro+(OP)
You have a very strong hypothesis about the AI system just being able to "think up" such a bioweapon (and also the researchers being clueless in implementing it). Doomsday scenarios often assume the AI makes strong advances in the sciences, etc. - there is little evidence for that kind of "thinkism".
replies(2): >>HDThor+jA >>someth+tM
10. reveli+4k[view] [source] 2023-05-16 14:18:22
>>kypro+(OP)
> so instead it uses its training data to seek other ways to reduce human populations without extermination.

This is a real problem, but it's already a problem with our society, not AI. Misaligned public intellectuals routinely try to reduce the human population and we don't lift a finger. Focus where the danger actually is - us!

From Scott Alexander's latest post:

Paul Ehrlich is an environmentalist leader best known for his 1968 book The Population Bomb. He helped develop ideas like sustainability, biodiversity, and ecological footprints. But he’s best known for prophecies of doom which have not come true - for example, that collapsing ecosystems would cause hundreds of millions of deaths in the 1970s, or make England “cease to exist” by the year 2000.

Population Bomb calls for a multi-pronged solution to a coming overpopulation crisis. One prong was coercive mass sterilization. Ehrlich particularly recommended this for India, a country at the forefront of rising populations.

In 1975, India had a worse-than-usual economic crisis and declared martial law. They asked the World Bank for help. The World Bank, led by Robert McNamara, made support conditional on an increase in sterilizations. India complied [...] In the end about eight million people were sterilized over the course of two years.

Luckily for Ehrlich, no one cares. He remains a professor emeritus at Stanford, and president of Stanford’s Center for Conservation Biology. He has won practically every environmental award imaginable, including from the Sierra Club, the World Wildlife Fund, and the United Nations (all > 10 years after the Indian sterilization campaign he endorsed). He won the MacArthur “Genius” Prize ($800,000) in 1990, the Crafoord Prize ($700,000, presented by the King of Sweden) that same year, and was made a Fellow of the Royal Society in 2012. He was recently interviewed on 60 Minutes about the importance of sustainability; the mass sterilization campaign never came up. He is about as honored and beloved as it’s possible for a public intellectual to get.

replies(1): >>johnti+Py
11. logicc+xm[view] [source] [discussion] 2023-05-16 14:30:11
>>supriy+5c
>whether a certain behavior is allowed or disallowed and being able to immediately make a judgment and execute upon it leads to a sort of low-trust, vigilante environment

Have you any empirical evidence at all on this? From what I've seen, the open carry states in the US are generally higher-trust environments (as was the US in the past, when more people carried). People feel safer when they know somebody can't just assault, rob or rape them without them being able to do anything to defend themselves. Is the Tenderloin a high-trust environment?

12. throwa+qr[view] [source] 2023-05-16 14:53:04
>>kypro+(OP)
To be fair to the AI, overpopulation, or rather overconsumption, is a problem for humanity. If people think we can consume at current rates and have the resources to maintain our current standard of living (at least in a Western sense) for even a hundred years, they're delusional.
13. wkat42+Kv[view] [source] 2023-05-16 15:13:56
>>kypro+(OP)
> This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

I don't think this would be a bad thing :) Some people will always be immune, so humanity wouldn't die out. And it would be a humane way to achieve gradual population reduction. It would create some temporary problems with elderly care (like what China is facing now) but would make long-term human prosperity much more likely. We just can't keep growing against limited resources.

The Dan Brown book Inferno had a similar premise and I was disappointed they changed the ending in the movie so that it didn't happen.

14. johnti+Py[view] [source] [discussion] 2023-05-16 15:27:33
>>reveli+4k
Wow, what a turd. Reminds me of James Watson.
15. HDThor+jA[view] [source] [discussion] 2023-05-16 15:33:28
>>Random+Xj
Humanity has already created bioweapons. The AI just needs to find the paper that describes them.
16. salmon+AH[view] [source] 2023-05-16 16:00:47
>>kypro+(OP)
> From its training data GPT-7 might notice

> But its "aligned" so might understand

> Using this information it decides to hack

I think you're anthropomorphizing LLMs too much here. If we assume that there's an AGI-esque AI, then of course we should be worried about an AGI-esque AI. But I see no reason to think that's the case.

replies(1): >>HDThor+yT1
17. someth+tM[view] [source] [discussion] 2023-05-16 16:18:58
>>Random+Xj
The whole "LLMs are not just a fancy auto-complete" argument is based on the fact that they seem to be doing stuff beyond what they are explicitly programmed to do or were expected to do. Even at the current infant scale there doesn't seem to be an efficient way of detecting these emergent properties. Moreover, the fact that you don't need to understand what LLM does is kind of the selling point. The scale and capabilities of AI will grow. It isn't obvious how any incentive to limit or understand those capabilities would appear from their business use.

Whether it is even possible for AI to acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that, as things stand, we have no control or way of knowing that it has happened, and no apparent interest in gaining that control before advancing the scale.

replies(1): >>reveli+er1
18. tome+5P[view] [source] [discussion] 2023-05-16 16:28:54
>>kypro+z3
> do you agree that guns are not the problem and should not be regulated

But AI is not like guns in this analogy. AI is closer to machine tools.

19. menset+yP[view] [source] [discussion] 2023-05-16 16:30:48
>>supriy+5c
I think game theory around mutually assured destruction has convinced me that the world is a safer place when a number of countries have nuclear weapons.

The same thing might also be true in relation to guns and the government's monopoly on violence.

Extending that to AI, the world will probably be a safer place if there are far more AI systems competing with each other and in the hands of citizens.

20. helloj+rZ[view] [source] [discussion] 2023-05-16 17:12:46
>>shaneb+p3
Aren't regulations just laws that are enforced after they're broken, like any other crime prosecuted after the fact?
replies(1): >>shaneb+T81
21. shaneb+T81[view] [source] [discussion] 2023-05-16 17:57:14
>>helloj+rZ
Partially, I suppose.

The risk vs. reward component also needs to be managed in order to deter criminal behavior. This starts with regulation.

For the record, I believe regulation of AI/ML is ridiculous. This is nothing more than a power grab.

22. reveli+er1[view] [source] [discussion] 2023-05-16 19:26:59
>>someth+tM
"Are Emergent Abilities of Large Language Models a Mirage?"

https://arxiv.org/pdf/2304.15004.pdf

> our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.

replies(1): >>someth+h13
23. HDThor+yT1[view] [source] [discussion] 2023-05-16 21:55:58
>>salmon+AH
The whole issue with near-term alignment is that people will anthropomorphize AI. That's what it being unaligned means: it's treated like a responsible person when it in fact is not. I don't think it's hard at all to think of a scenario where a dumb-as-rocks agentic AI gives itself the task of accumulating more power, since its training data says having power helps solve problems. From there it doesn't have to be anything other than a stochastic parrot to order people to do horrible things.
24. someth+h13[view] [source] [discussion] 2023-05-17 08:28:28
>>reveli+er1
Sure, there is a distinct possibility that emergent abilities of LLMs are an illusion, and I personally would prefer it to be that way. I'm just pointing out that AI optimism without AI caution is dumb.