zlacker

1. mitthr+(OP)[view] [source] 2023-05-17 03:20:58
What qualifies in your view as a "real model"? Do we need to empirically observe the human race going extinct first?
replies(1): >>Satam+9Z2
2. Satam+9Z2[view] [source] 2023-05-17 23:08:20
>>mitthr+(OP)
No, but we should still be level-headed. It's like arguing that because 1) fire kills and 2) fire spreads, the whole earth will thus be engulfed in fire and everyone will die.

If there's no real model behind this, I can argue just as well that a sufficiently intelligent AGI will be able to protect me from any harm because it's so smart and powerful.

And newsflash, if nothing changes, we're all going to die anyway. As it is, our existence is quite limited and it is only through constant creation of unnatural contraptions that we have managed to improve our quality of life.

replies(1): >>mitthr+Lz5
3. mitthr+Lz5[view] [source] [discussion] 2023-05-18 18:43:50
>>Satam+9Z2
OK. Well, it turns out that the rationale behind AI risk is a bit more sophisticated than that.
replies(1): >>Satam+ps8
4. Satam+ps8[view] [source] [discussion] 2023-05-19 15:53:55
>>mitthr+Lz5
Not OK. Show me evidence that the AI panic is driven by such accepted and sophisticated models.
replies(1): >>mitthr+iIC
5. mitthr+iIC[view] [source] [discussion] 2023-05-30 00:18:10
>>Satam+ps8
Take your pick?

https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

https://intelligence.org/files/AlignmentHardStart.pdf

https://www.youtube.com/watch?v=pYXy-A4siMw

https://europepmc.org/article/med/26185241

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

I could keep going, but the reading on this spans tens of thousands of pages of detailed reasoning and analysis, including mathematical proofs and lab-scale experiments.
