zlacker

[parent] [thread] 3 comments
1. digbyb+(OP)[view] [source] 2023-05-16 13:05:22
I’ll have to dig it up, but in the last interview I saw with him, he was focused more on existential risk from potential superintelligence, not just misuse.
replies(1): >>tomrod+Pr
2. tomrod+Pr[view] [source] 2023-05-16 15:19:43
>>digbyb+(OP)
The NYT piece implied that, but no, his concern was less about an existential singularity and more about immoral use.
replies(1): >>cma+tz1
3. cma+tz1[view] [source] [discussion] 2023-05-16 20:31:18
>>tomrod+Pr
Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy at all, just misguided in his proposed solutions. But Eliezer, for instance, has said something pretty close to: AI might escape by entering the quantum Konami code that the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.

replies(1): >>tomrod+PC1
4. tomrod+PC1[view] [source] [discussion] 2023-05-16 20:49:30
>>cma+tz1
Then we created God(s) and rightfully should worship them to appease their unknowable and ineffable nature.

Or recognize that existing AI might be great at generating human cognitive artifacts but doesn't yet manage actual logical thought.
