zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. Shank+Qf 2023-11-18 09:27:43
>>convex+(OP)
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts a new AI company, it probably won't be capped-profit; it will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protections against "selling AGI" once it is developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

2. Lacerd+Ng 2023-11-18 09:36:44
>>Shank+Qf
If MS gets their hands on an AGI, God help us; no "organizational safeguards" will matter.

Not that I think AGI is possible or desirable in the first place, but that's a different discussion.

3. concor+Pm 2023-11-18 10:27:18
>>Lacerd+Ng
Impossible with LLMs, impossible with currently known techniques, or impossible full stop?
4. wil421+es 2023-11-18 11:14:54
>>concor+Pm
Impossible with computers full stop. IMHO, we may be able to splice together DNA, or modify it, to create a new organism smarter than any AGI we could build in a computer.

They already shifted the goalposts, and they'll do it again. AI used to mean AGI but marketing got a hold of it. Once something resembling AGI comes out, they'll say it's not "Level 5 AGI" or something similar.

5. pixl97+3v1 2023-11-18 17:43:03
>>wil421+es
>AI used to mean AGI but marketing got a hold of it

It does not, and it never has.

What has happened to the term AI over time has more to do with the word "intelligence" itself. When we set out to ascribe intelligence to systems, we started to realize we were really bad at doing the same for animal and human systems. We were also terrible at separating component-level from system-level intelligence. For example, you seem to think that intelligence requires meat, but you don't give any reasoning for that conclusion.

This list of problems with defining intelligence will only get longer as we build more capable systems and learn about new forms of intelligence we didn't expect to be possible.
