zlacker

[parent] [thread] 6 comments
1. archag+(OP)[view] [source] 2024-03-01 20:11:01
I shudder at a world where only corporations had nukes.
replies(2): >>reduce+v >>chasd0+P4
2. reduce+v[view] [source] 2024-03-01 20:14:42
>>archag+(OP)
And yet, still safer than everyone having nukes...

It's unfortunate that the AGI debate still hasn't made its way very far into these parts. Still have people going, "well this would be bad too." Yes! That is the existential problem a lot of people are grappling with. There is currently, and likely will be, no good way out of this. Too much "Don't Look Up" going on.

3. chasd0+P4[view] [source] 2024-03-01 20:40:15
>>archag+(OP)
Nuclear weapons are a ridiculous comparison, and it only furthers the gaslighting of society. At the barest of bare minimums, AI might, possibly, theoretically, perhaps pose a threat to established power structures (like any disruptive technology does). A nuclear weapon, however, definitely destroys physical objects within its effective range. Relating the two is absurd.
replies(2): >>esafak+8c >>reduce+8c2
4. esafak+8c[view] [source] [discussion] 2024-03-01 21:26:39
>>chasd0+P4
A disembodied intelligent agent could still trigger a weapon, or manipulate a person into triggering one.
replies(1): >>jerbea+qh
5. jerbea+qh[view] [source] [discussion] 2024-03-01 22:02:34
>>esafak+8c
So can a human, yet we don't ban those. I don't think AI is going to get better at manipulating people than a sufficiently skilled human.

What might be scary is using AI for a mass influence operation, propaganda to convince people that, for example, using a weapon is necessary.

replies(1): >>esafak+Xi
6. esafak+Xi[view] [source] [discussion] 2024-03-01 22:13:34
>>jerbea+qh
We do prosecute humans who misuse weapons. The problem with AI is that the potential for damage is hard to even gauge; potentially an extinction event, so we have to take more precautions than just prosecuting after the fact. And if the AI has agency, one might argue that it is responsible... what then?
7. reduce+8c2[view] [source] [discussion] 2024-03-02 18:34:39
>>chasd0+P4
It's not a ridiculous comparison. This thread involves Sam Altman and Elon Musk, right?

Sam Altman:"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

That's from his essay "Why You Should Fear Machine Intelligence": https://blog.samaltman.com/machine-intelligence-part-1

So, more than nukes then...

Elon Musk: "There’s a strong probability that it [AGI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity."
