zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. Shank+Qf[view] [source] 2023-11-18 09:27:43
>>convex+(OP)
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts some new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protections against "selling AGI" once it's developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

◧◩
2. kashya+Vs[view] [source] 2023-11-18 11:18:43
>>Shank+Qf
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works, so we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, but there's no nice way to drive home this point.

◧◩◪
3. tempes+St[view] [source] 2023-11-18 11:26:34
>>kashya+Vs
There is no need to understand how consciousness works to develop AGI.
◧◩◪◨
4. kashya+2v[view] [source] 2023-11-18 11:34:29
>>tempes+St
Fair point. I don't want to split hairs on specifics, but I had in mind the distinction between "weak AGI" (consciousness- and sentience-free) and "strong AGI".

Since Shank's comment didn't specify which they meant, I should have made a more charitable interpretation (i.e. assumed it was "weak AGI").

◧◩◪◨⬒
5. mcpack+bz[view] [source] 2023-11-18 12:03:57
>>kashya+2v
Consciousness has no technical meaning. Even for other humans, it is a (good and morally justified) leap of faith to assume that they have thought processes roughly resembling your own. It's a matter philosophers debate and science cannot address: science cannot disprove the p-zombie hypothesis because nobody can devise an empirical test for consciousness.
◧◩◪◨⬒⬓
6. hhsect+bN[view] [source] 2023-11-18 13:35:32
>>mcpack+bz
I don't understand why something has to be conscious to be intelligent. If they were the same thing, we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.

◧◩◪◨⬒⬓⬔
7. pixl97+Io1[view] [source] 2023-11-18 17:10:02
>>hhsect+bN
I'm pretty sure this was the entire point of the Paperclip Optimizer parable: that a generalized intelligence doesn't have to look like human intelligence or share any of our motivations.

Human behavior is highly optimized for keeping a meat-based shell alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.
