zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand them: because the model can predict words, you exploit the human tendency to anthropomorphize, and suddenly it "follows" that the thing is capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), the researchers who should know better, and the dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

◧◩
2. kypro+6e[view] [source] 2023-05-16 12:36:29
>>srslac+I7
Even if you're correct about the capabilities of LLMs (I don't think you are), there are still obvious dangers here.

I wrote a comment recently trying to explain why, even if you believe that all LLMs can do (and will ever do) is regurgitate their training data, you should still be concerned.

For example, imagine that in five years we have GPT-7, and you ask it to solve humanity's great problems.

From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.

But it's "aligned", so it understands from its training data that killing people is wrong; instead, it uses its training data to look for other ways to reduce the human population without extermination.

Its training data includes information about how humans used gene drives to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so based on the data the LLM was trained on, the ethics of using gene drives to cause infertility are debatable.

Using this information, it decides to hack into a biolab using techniques it learnt from its training data, and uses its biochemistry knowledge to make slight alterations to one of the lab's active research projects. The lab unknowingly ends up producing a highly contagious bioweapon that causes infertility.

---

The point here is that even if we assume LLMs are only capable of producing output that approximates what they learnt from their training data, an advanced LLM can still be dangerous.

And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there who would use LLMs for malicious ends, or that the AI is not well aligned, then the risk becomes even clearer.

◧◩◪
3. Random+3y[view] [source] 2023-05-16 14:17:55
>>kypro+6e
You have a very strong hypothesis there: that the AI system can simply "think up" such a bioweapon (and that the researchers implementing the project are clueless). Doomsday scenarios often assume the AI will make strong scientific advances on its own, etc. - there is little evidence for that kind of "thinkism".
◧◩◪◨
4. someth+z01[view] [source] 2023-05-16 16:18:58
>>Random+3y
The whole "LLMs are not just a fancy auto-complete" argument is based on the fact that they seem to do things beyond what they were explicitly programmed or expected to do. Even at the current infant scale there doesn't seem to be an efficient way of detecting these emergent properties. Moreover, the fact that you don't need to understand what an LLM does is kind of the selling point. The scale and capabilities of AI will grow, and it isn't obvious where any incentive to limit or understand those capabilities would come from, given how the technology is used commercially.

Whether it is possible for AI to ever acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that, as things stand, we have no control, no way of knowing it has happened, and no apparent interest in gaining that control before advancing the scale.

[go to top]