zlacker

6 comments
1. kashya+(OP) 2023-11-18 11:34:29
Fair point. I don't want to split hairs on specifics, but I had in mind the distinction between "weak AGI" (consciousness- and sentience-free) and "strong AGI".

Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e., assumed they meant "weak AGI").

replies(2): >>mcpack+94 >>22c+Ik
2. mcpack+94 2023-11-18 12:03:57
>>kashya+(OP)
Consciousness has no technical meaning. Even for other humans, it is a (good and morally justified) leap of faith to assume that they have thought processes roughly resembling your own. It's a matter philosophers debate and science cannot address: science cannot disprove the p-zombie hypothesis because nobody can devise an empirical test for consciousness.
replies(1): >>hhsect+9i
3. hhsect+9i 2023-11-18 13:35:32
>>mcpack+94
I don't understand why something has to be conscious to be intelligent. If they were the same thing, we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.

replies(2): >>mcpack+eA >>pixl97+GT
4. 22c+Ik 2023-11-18 13:51:19
>>kashya+(OP)
OpenAI (re?)defines AGI as a general AI that is able to perform most tasks as well as or better than a human. It's possible that, under this definition and by skewing certain metrics, they are quite close to "AGI" in the same way that Google has already achieved "quantum supremacy".
replies(1): >>anon29+Vt
5. anon29+Vt 2023-11-18 14:43:32
>>22c+Ik
Has OpenAI enumerated a list of tasks humans can do? What a useless definition. By a reasonable interpretation of this definition, we are already there. Given ChatGPT's constraints (ingesting and outputting only text), it already performs better than most humans...

Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes on mathematical reasoning are typical of human behavior.

6. mcpack+eA 2023-11-18 15:22:40
>>hhsect+9i
I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system: what tasks is it able to perform? AGI will be able to perform, in general, any reasonable task you throw at it. Whether or not it's a p-zombie won't matter to engineers, only to philosophers and theologians (or engineers moonlighting as those).
7. pixl97+GT 2023-11-18 17:10:02
>>hhsect+9i
I'm pretty sure this was the entire point of the paperclip maximizer parable: that generalized intelligence doesn't have to look like humans or share any of their motivations.

Human behavior is highly optimized for keeping a meat-based shell alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.
