zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. Shank+Qf[view] [source] 2023-11-18 09:27:43
>>convex+(OP)
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts some new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI inked with Microsoft, and the protections against "selling AGI" once it's developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

◧◩
2. keepam+Xx[view] [source] 2023-11-18 11:56:01
>>Shank+Qf
I think the surprising truth is that all of these people are essentially replaceable.

They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.

The Singularity train has already left the station.

Inevitability.

Now humanity is just waiting for it to arrive at our stop.

◧◩◪
3. bernie+rz[view] [source] 2023-11-18 12:06:00
>>keepam+Xx
I disagree. I don’t think LLMs are a pathway to AGI. I think LLMs will lead to incredibly powerful game-changing tools and will drive changes that affect the course of humanity, but this technology won’t lead to AGI directly.

I think AGI is going to arrive via a different technology, many years in the future still.

LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

◧◩◪◨
4. keepam+vC[view] [source] 2023-11-18 12:26:00
>>bernie+rz
I'm not saying LLMs are. LLMs are not the only thing going on right now. But they do enable a powerful tool.

I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.

◧◩◪◨⬒
5. SAI_Pe+FA1[view] [source] 2023-11-18 18:10:38
>>keepam+vC
Also continuous learning. The training step is currently separate from the inference step, so new model generations have to be trained from scratch instead of learning continuously. Of course, continuous learning in a chatbot runs into the Microsoft Tay problem, where people train it to respond offensively.
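The train/inference split being described can be illustrated with a toy contrast between a frozen model and an online learner (a minimal sketch; the class names and the y = 3x feedback stream are invented for illustration and have nothing to do with any real system):

```python
class FrozenModel:
    """Typical deployment: weights are fixed at inference time."""
    def __init__(self, w):
        self.w = w

    def predict(self, x):
        return self.w * x


class OnlineModel:
    """Continuous learning: every interaction is also a training step."""
    def __init__(self, w, lr=0.01):
        self.w = w
        self.lr = lr

    def predict(self, x):
        return self.w * x

    def update(self, x, target):
        # One stochastic-gradient step on squared error (y - wx)^2.
        error = self.predict(x) - target
        self.w -= self.lr * 2 * error * x


frozen = FrozenModel(w=0.0)
online = OnlineModel(w=0.0)

# Simulated stream of (input, feedback) pairs; the true relation is y = 3x.
# Note the online model updates on *whatever* feedback it receives -- which
# is exactly the Tay-style poisoning risk if users supply the targets.
for x in range(1, 200):
    xf = x / 100.0
    online.update(xf, 3.0 * xf)
```

After the stream, `frozen.predict(1.0)` is still 0.0 (it never learns from deployment data), while `online.w` has drifted close to 3.0.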
◧◩◪◨⬒⬓
6. keepam+LM2[view] [source] 2023-11-19 01:02:25
>>SAI_Pe+FA1
Yeah, evolution across multiple generations. Necessary for sure. Things have to die; otherwise there's no risk. Without risk there's no real motivation to live, and without that there's no emotion and no motivation to learn, and without that there's no AGI.