zlacker

[parent] [thread] 2 comments
1. hhsect+(OP)[view] [source] 2023-11-18 13:35:32
I don't understand why something has to be conscious to be intelligent. If they were the same thing we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.

replies(2): >>mcpack+5i >>pixl97+xB
2. mcpack+5i[view] [source] 2023-11-18 15:22:40
>>hhsect+(OP)
I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system. What tasks is it able to perform? AGI will be able to perform, in general, any reasonable task you throw at it. Whether or not it's a p-zombie won't matter to engineers, only to philosophers and theologians (or engineers moonlighting as those).
3. pixl97+xB[view] [source] 2023-11-18 17:10:02
>>hhsect+(OP)
I'm pretty sure this was the entire point of the paperclip maximizer parable: generalized intelligence doesn't have to look like humans or share any of their motivations.

Human behavior is highly optimized around keeping a meat-based shell alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.
