zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. Shank+Qf[view] [source] 2023-11-18 09:27:43
>>convex+(OP)
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts a new AI company, it probably won't be capped-profit; it will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protections against "selling AGI" once it's developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

◧◩
2. kashya+Vs[view] [source] 2023-11-18 11:18:43
>>Shank+Qf
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, but there's no nice way to drive home this point.

◧◩◪
3. tempes+St[view] [source] 2023-11-18 11:26:34
>>kashya+Vs
There is no need to understand how consciousness works to develop AGI.
◧◩◪◨
4. kashya+2v[view] [source] 2023-11-18 11:34:29
>>tempes+St
Fair point. I don't want to split hairs over specifics, but I had in mind the distinction between "weak AGI" (consciousness- and sentience-free) and "strong AGI".

Since Shank's comment didn't specify which they meant, I should have made the more charitable interpretation (i.e., assumed "weak AGI").

◧◩◪◨⬒
5. 22c+KP[view] [source] 2023-11-18 13:51:19
>>kashya+2v
OpenAI (re?)defines AGI as a general AI that can perform most tasks as well as or better than a human. It's possible that under this definition, and by skewing certain metrics, they are quite close to "AGI" in the same way that Google has already achieved "quantum supremacy".