zlacker

[parent] [thread] 4 comments
1. Random+(OP)[view] [source] 2023-05-16 14:17:55
You have a very strong hypothesis that the AI system could just "think up" such a bioweapon (and also that the researchers would be clueless in implementation). I often see doomsday scenarios assuming the AI will make strong advances in the sciences, etc. - there is little evidence for that kind of "thinkism".
replies(2): >>HDThor+mg >>someth+ws
2. HDThor+mg[view] [source] 2023-05-16 15:33:28
>>Random+(OP)
Humanity has already created bioweapons. The AI just needs to find the paper that describes them.
3. someth+ws[view] [source] 2023-05-16 16:18:58
>>Random+(OP)
The whole "LLMs are not just a fancy auto-complete" argument is based on the fact that they seem to do things beyond what they were explicitly programmed or expected to do. Even at the current infant scale there doesn't seem to be an efficient way of detecting these emergent properties. Moreover, the fact that you don't need to understand what an LLM does is kind of the selling point. The scale and capabilities of AI will grow. It isn't obvious how any incentive to limit or understand those capabilities would arise from their business use.

Whether it is possible for AI to ever acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that, as things stand, we have no control over or way of knowing whether it has happened, and no apparent interest in gaining that control before advancing the scale.

replies(1): >>reveli+h71
4. reveli+h71[view] [source] [discussion] 2023-05-16 19:26:59
>>someth+ws
"Are Emergent Abilities of Large Language Models a Mirage?"

https://arxiv.org/pdf/2304.15004.pdf

our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.

replies(1): >>someth+kH2
5. someth+kH2[view] [source] [discussion] 2023-05-17 08:28:28
>>reveli+h71
Sure, there is a distinct possibility that emergent abilities of LLMs are an illusion, and I personally would prefer it to be that way. I'm just pointing out that AI optimism without AI caution is dumb.