zlacker

[parent] [thread] 2 comments
1. tomrod+(OP)[view] [source] 2023-05-16 15:19:43
The NYT piece implied that, but no, his concern was less existential singularity and more on immoral use.
replies(1): >>cma+E71
2. cma+E71[view] [source] 2023-05-16 20:31:18
>>tomrod+(OP)
Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer, for instance, has said something pretty close to: AI might escape by entering the quantum Konami code that the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.

replies(1): >>tomrod+0b1
3. tomrod+0b1[view] [source] [discussion] 2023-05-16 20:49:30
>>cma+E71
Then we created God(s) and rightfully should worship it to appease its unknowable and ineffable nature.

Or recognize that existing AI might be great at generating human cognitive artifacts but doesn't yet hit that logical thought.
