zlacker

[parent] [thread] 9 comments
1. nradov+(OP)[view] [source] 2023-11-18 04:27:41
The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.
replies(4): >>menset+G >>MVisse+53 >>dr_dsh+Yc >>anon29+Ud
2. menset+G[view] [source] 2023-11-18 04:32:48
>>nradov+(OP)
It’s impossible to predict.

No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.

3. MVisse+53[view] [source] 2023-11-18 04:47:37
>>nradov+(OP)
No one knows, which makes this a classic scientific problem. That is what Ilya wants to focus on, which I think is fair, given this aligns with the original mission of OpenAI.

I think it’s also fair that Sam starts something new with a for-profit focus from the get-go.

4. dr_dsh+Yc[view] [source] 2023-11-18 05:58:41
>>nradov+(OP)
AGI is about definitions. By many definitions, it’s already here. Hence MSR’s “sparks of AGI” paper and Eric Schmidt’s article in Noema. But by the definition “as good or better than humans at all things”, it fails.
replies(2): >>nradov+zd >>dr_dsh+Wq3
5. nradov+zd[view] [source] [discussion] 2023-11-18 06:05:34
>>dr_dsh+Yc
That "Sparks of AGI" paper was total garbage, just complete nonsense and confirmation bias.

Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could just as well claim that ELIZA was AGI, which would obviously be ridiculous.

replies(3): >>dr_dsh+To1 >>Shamel+oK1 >>pasaba+Gd6
6. anon29+Ud[view] [source] 2023-11-18 06:08:54
>>nradov+(OP)
> The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.

How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this firsthand: I maintain a very little-known Haskell library. I have asked ChatGPT to do various things with my own library, which I have never written about online, and which it has never seen before. I regularly ask it to answer questions others send to me. It gets them basically right. This is completely novel.

It seems pretty obvious to me that scaling this approach will lead to computer systems that can solve problems they've never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about by scaling parameter counts... at all.

What is AGI if not problem solving in novel domains?

7. dr_dsh+To1[view] [source] [discussion] 2023-11-18 15:25:01
>>nradov+zd
What specifically made it “garbage” to you? My mind was blown, if I’m honest, when I read it.

How do you compare ELIZA to GPT-4?

8. Shamel+oK1[view] [source] [discussion] 2023-11-18 17:21:23
>>nradov+zd
> The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human.

That is a definition. It is not a generally accepted definition.

9. dr_dsh+Wq3[view] [source] [discussion] 2023-11-19 03:21:16
>>dr_dsh+Yc
https://arxiv.org/abs/2311.02462

On operationalizing definitions of AGI

10. pasaba+Gd6[view] [source] [discussion] 2023-11-19 22:29:04
>>nradov+zd
Tbh, I always thought the whole business about 'intelligence' was just marketing garbage. There are no really good rigorous definitions of intelligence, so asking whether a product exhibits intelligence is basically nonsense. There are two questions about LLMs that are good, though:

1. Are they useful?

2. Are they going to become more useful in the foreseeable future?

On 1, I would say, maybe? Like, somewhere between Microsoft Word and Excel? On 2, I would say, sure — an 'AGI' would be tremendously useful. But it's also tremendously unlikely to somehow grow out of the current state of the art. People disagree on that point, but I don't think there are even compelling reasons to believe that LLMs can evolve beyond their current status as bullshit generators.
