zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. Shank+Qf 2023-11-18 09:27:43
>>convex+(OP)
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts a new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI inked with Microsoft, and the protections against "selling" AGI once it's developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

2. kashya+Vs 2023-11-18 11:18:43
>>Shank+Qf
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, but there's no nice way to drive home this point.

3. fennec+1v 2023-11-18 11:34:25
>>kashya+Vs
Why not? It's on topic.

Should people discussing nuclear energy not talk about fusion?

4. kashya+qv 2023-11-18 11:37:07
>>fennec+1v
Fair question. I meant it should be talked about with more nuance and specificity, since the definition of "AGI" is whatever you make of it.

Also, I hope my response to tempestn clarifies a bit more.

Edit: I'll be more explicit about what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, and simplicity (not to be confused with "easy"!); it's an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)

5. pixl97+Dt1 2023-11-18 17:35:12
>>kashya+qv
I'd say this falls into an even more base question...

What is intelligence?

This is a nearly impossible question to answer for human intelligence, as the answer could fill libraries. You have subcellular intelligence, cellular-level intelligence, organ-level intelligence, body-systems-level intelligence, whole-body-level intelligence, and then our intellectual intelligence that goes above and beyond the animal level.

These are all different things that work in concert to keep you alive and everything functioning, and in a human they cannot be separated. But what happens when you have "intelligence" that isn't worried about staying alive? Which parts of the system are or are not important for what we consider human intelligence? It's going to look a lot different from a person.
