zlacker

[parent] [thread] 3 comments
1. nradov+(OP)[view] [source] 2023-11-18 06:05:34
That "Sparks of AI" paper was total garbage, just complete nonsense and confirmation bias.

Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could just as well claim that ELIZA was AGI, which would obviously be ridiculous.

replies(3): >>dr_dsh+kb1 >>Shamel+Pw1 >>pasaba+706
2. dr_dsh+kb1[view] [source] 2023-11-18 15:25:01
>>nradov+(OP)
What specifically made it “garbage” to you? If I’m honest, my mind was blown when I read it.

How do you compare ELIZA to GPT-4?

3. Shamel+Pw1[view] [source] 2023-11-18 17:21:23
>>nradov+(OP)
> The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human.

That is a definition. It is not a generally accepted definition.

4. pasaba+706[view] [source] 2023-11-19 22:29:04
>>nradov+(OP)
Tbh, I always thought all the talk about 'intelligence' was just marketing garbage. There is no rigorous definition of intelligence, so asking whether a product exhibits it is basically meaningless. There are two questions about LLMs that are worth asking, though:

1. Are they useful?

2. Are they going to become more useful in the foreseeable future?

On 1, I would say, maybe? Somewhere between Microsoft Word and Excel, usefulness-wise. On 2, sure, an 'AGI' would be tremendously useful, but one is also tremendously unlikely to grow out of the current state of the art. People disagree on that point, but I don't see any compelling reason to believe that LLMs can evolve beyond their current status as bullshit generators.
