Ilya Sutskever "at the center" of Altman firing?

1. convex+X1 2023-11-18 02:56:12
>>apsec1+(OP)
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

2. nradov+5e 2023-11-18 04:27:41
>>convex+X1
The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.
3. anon29+Zr 2023-11-18 06:08:54
>>nradov+5e
> The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.

How can you honestly say things like this? ChatGPT can sometimes solve problems it has never explicitly been presented with. I know this firsthand. I maintain a very little-known Haskell library, and I have asked ChatGPT to do various things with it: tasks I have never written about online and that it cannot have seen in training. I regularly pass it questions that other people send me, and it gets them basically right. This is completely novel.
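To make the test concrete, here is a sketch of its shape. This is not my actual library, and every name below (IntervalSet, insert, covered, and so on) is invented for illustration; the real thing is about this size and roughly this obscure:

    module Main where

    -- Invented stand-in for the kind of small, obscure library I mean.
    -- A set of closed integer intervals, kept sorted and non-overlapping.
    newtype IntervalSet = IntervalSet [(Int, Int)]
      deriving Show

    empty :: IntervalSet
    empty = IntervalSet []

    -- Insert an interval, merging anything it overlaps or touches.
    insert :: (Int, Int) -> IntervalSet -> IntervalSet
    insert iv0 (IntervalSet ivs0) = IntervalSet (go iv0 ivs0)
      where
        go new [] = [new]
        go new@(lo, hi) (cur@(a, b) : rest)
          | b + 1 < lo = cur : go new rest             -- cur lies entirely before new
          | hi + 1 < a = new : cur : rest              -- cur lies entirely after new
          | otherwise  = go (min a lo, max b hi) rest  -- overlapping/adjacent: merge

    member :: Int -> IntervalSet -> Bool
    member x (IntervalSet ivs) = any (\(a, b) -> a <= x && x <= b) ivs

    -- The kind of task I give ChatGPT, showing it only the code above:
    -- "write a function that counts how many integers the set covers."
    -- A correct solution looks like this:
    covered :: IntervalSet -> Int
    covered (IntervalSet ivs) = sum [b - a + 1 | (a, b) <- ivs]

    main :: IO ()
    main = do
      let s = insert (5, 9) (insert (1, 3) empty)
      print (member 2 s)  -- True
      print (member 4 s)  -- False
      print (covered s)   -- 8

The point is not interval sets. The point is that the library is obscure enough that a correct answer cannot be memorized; it has to be worked out from the code itself.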

It seems pretty obvious to me that scaling this approach will lead to computer systems that can solve problems they have never seen before. Especially since nothing about smaller transformer models suggested that these emergent capabilities would appear just from scaling up the parameter count.

What is AGI if not problem-solving in novel domains?
