zlacker

[parent] [thread] 21 comments
1. kashya+(OP)[view] [source] 2023-11-18 11:18:43
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works, so we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language; there's no nice way to drive home this point.

replies(7): >>tempes+X >>fennec+62 >>rpigab+y6 >>lagran+a7 >>umanwi+a8 >>anon29+qw >>andomi+kV
2. tempes+X[view] [source] 2023-11-18 11:26:34
>>kashya+(OP)
There is no need to understand how consciousness works to develop AGI.
replies(2): >>kashya+72 >>Robert+AJ
3. fennec+62[view] [source] 2023-11-18 11:34:25
>>kashya+(OP)
Why not? It's on topic.

Should people discussing nuclear energy not talk about fusion?

replies(2): >>kashya+v2 >>adamma+x4
4. kashya+72[view] [source] [discussion] 2023-11-18 11:34:29
>>tempes+X
Fair point. I don't want to split hairs on specifics, but I had in mind the distinction between "weak AGI" (consciousness- and sentience-free) and "strong AGI".

Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e., assumed they meant "weak AGI").

replies(2): >>mcpack+g6 >>22c+Pm
5. kashya+v2[view] [source] [discussion] 2023-11-18 11:37:07
>>fennec+62
Fair question. I meant it should be talked about with more nuance and specifics, as the definition of "AGI" is what you make of it.

Also, I hope my response to tempestn clarifies a bit more.

Edit: I'll be more explicit about what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, and simplicity (not to be confused with "easy"!), and it's an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)

replies(1): >>pixl97+I01
6. adamma+x4[view] [source] [discussion] 2023-11-18 11:52:43
>>fennec+62
We know that fusion is a very common process that inevitably happens to even the simplest elements if you just make them hot enough; we just don't know how to do that in a controlled manner. We don't really know what intelligence is, how it came about, how we would ever recreate it artificially, or whether that's even possible. LLMs are some pretty convincing tricks, but that's on the level of making loud noises behind a curtain and calling it fusion.
7. mcpack+g6[view] [source] [discussion] 2023-11-18 12:03:57
>>kashya+72
Consciousness has no technical meaning. Even for other humans, it is a (good and morally justified) leap of faith to assume that they have thought processes that roughly resemble your own. It's a matter philosophers debate and science cannot address: science cannot disprove the p-zombie hypothesis because nobody can devise an empirical test for consciousness.
replies(1): >>hhsect+gk
8. rpigab+y6[view] [source] 2023-11-18 12:06:05
>>kashya+(OP)
Yeah, it's almost like the metaverse.
9. lagran+a7[view] [source] 2023-11-18 12:12:37
>>kashya+(OP)
After all, OpenAI's original mission was to create the first AGI, before some bad guys do, iirc.
10. umanwi+a8[view] [source] 2023-11-18 12:17:40
>>kashya+(OP)
AGI does not require consciousness.
replies(2): >>inpare+nH >>Robert+kJ
11. hhsect+gk[view] [source] [discussion] 2023-11-18 13:35:32
>>mcpack+g6
I don't understand why something has to be conscious to be intelligent. If they were the same thing, we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.

replies(2): >>mcpack+lC >>pixl97+NV
12. 22c+Pm[view] [source] [discussion] 2023-11-18 13:51:19
>>kashya+72
OpenAI (re?)defines AGI as a general AI that is able to perform most tasks as well as or better than a human. It's possible that, under this definition and by skewing certain metrics, they are quite close to "AGI" in the same way that Google has already achieved "quantum supremacy".
replies(1): >>anon29+2w
13. anon29+2w[view] [source] [discussion] 2023-11-18 14:43:32
>>22c+Pm
Has OpenAI enumerated a list of tasks humans can do? What a useless definition. By a reasonable interpretation of this definition, we are already there. Given ChatGPT's constraints (ingesting and outputting only text), it already performs better than most humans...

Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes on mathematical reasoning are typical human behavior.

14. anon29+qw[view] [source] 2023-11-18 14:44:56
>>kashya+(OP)
An AGI system is not human and shouldn't be treated as such. Consciousness is not a trait of intelligence. Consciousness usually requires qualia, which puts animals ahead of computers.
replies(1): >>ajmurm+Psi
15. mcpack+lC[view] [source] [discussion] 2023-11-18 15:22:40
>>hhsect+gk
I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system. What tasks is it able to perform? AGI will be able to generally perform any reasonable task you throw at it. Whether or not it's a p-zombie won't matter to engineers, only to philosophers and theologians (or engineers moonlighting as those).
16. inpare+nH[view] [source] [discussion] 2023-11-18 15:54:18
>>umanwi+a8
What is AGI?

What is consciousness?

17. Robert+kJ[view] [source] [discussion] 2023-11-18 16:04:41
>>umanwi+a8
Maybe, but we also don't know what AGI requires.
18. Robert+AJ[view] [source] [discussion] 2023-11-18 16:06:24
>>tempes+X
That's a hypothesis. It may not be true, as we have yet to build AGI.
19. andomi+kV[view] [source] 2023-11-18 17:07:36
>>kashya+(OP)
Yes, we should absolutely talk about that, because it's a key contributor to a lot of the worry about letting Sam continue to go around and do things like strong-arming the US government in public. He's getting high on his own supply, and I don't think he is going to be allowed to continue fucking around like that. And that goes for any scientists who have joined up in his apocalyptic and extremely dangerous worldview as well.
20. pixl97+NV[view] [source] [discussion] 2023-11-18 17:10:02
>>hhsect+gk
I'm pretty sure this was the entire point of the Paperclip Optimizer parable: that generalized intelligence doesn't have to look like humans or share any of their motivations.

Human behavior is highly optimized around having a meat-based shell it has to keep alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.

21. pixl97+I01[view] [source] [discussion] 2023-11-18 17:35:12
>>kashya+v2
I'd say this falls into an even more base question...

What is intelligence?

This is a nearly impossible question to answer for human intelligence, as the answer could fill libraries. You have subcellular intelligence, cellular-level intelligence, organ-level intelligence, body-systems-level intelligence, whole-body-level intelligence, and then our intellectual intelligence, above and beyond the animal level.

These are all different things that work in concert to keep you alive and everything working, and in a human they cannot be separated. But what happens when you have 'intelligence' that isn't worried about staying alive? What parts of the system are or are not important for what we consider human intelligence? It's going to look a lot different from a person.

22. ajmurm+Psi[view] [source] [discussion] 2023-11-23 04:08:19
>>anon29+qw
How do you know intelligence isn't sufficient, and that computers cannot have qualia? Any incoming information could result in qualia. Just because we cannot imagine them doesn't mean they cannot be someone's subjective experience.