zlacker

[parent] [thread] 1 comment
1. threes+(OP)[view] [source] 2024-10-20 03:37:53
> It's not clear to me that LLMs sufficiently scaled won't achieve superhuman performance

To some extent this is true.

To calculate A + B you could, for example, generate A + B for trillions of (A, B) combinations and encode the results in the network. It would then recall sums faster than any human could.
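The memorization idea above can be sketched as a toy lookup table (an illustration of the point, not how any real LLM stores arithmetic; the function name is made up):

```python
# Toy sketch: "memorized" addition as a precomputed lookup table,
# rather than a general algorithm for adding numbers.

def build_memorized_adder(max_n):
    """Precompute every (a, b) -> a + b pair for 0 <= a, b < max_n."""
    return {(a, b): a + b for a in range(max_n) for b in range(max_n)}

table = build_memorized_adder(100)

# Instant and correct on any pair that was "trained in"...
print(table[(37, 54)])       # 91

# ...but there is no rule to fall back on for anything unseen.
print(table.get((100, 1)))   # None: this pair was never memorized
```

Recall is fast, but the table has no concept of addition, so it fails the moment the input steps outside what was encoded.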

But that's not intelligence. And Apple's research showed that LLMs are simply inferring relationships from the tokens they have access to, which you can throw off by adding useless information or by abstracting A + B into symbols.

replies(1): >>Dylan1+Q7
2. Dylan1+Q7[view] [source] 2024-10-20 05:46:00
>>threes+(OP)
> To calculate A + B you could for example generate A, B for trillions of combinations and encode that within the network. And it would calculate this faster than any human could.

I don't feel like this is a very meaningful argument, because if you can do that generation, then you must already have a superhuman machine for that task.
