zlacker

[parent] [thread] 1 comments
1. Aperoc+(OP)[view] [source] 2024-05-17 18:29:03
The premise is that LLMs are a path to AGI.

I'm not convinced. You can throw all the compute you want at it (btw, it's not growing exponentially any more; we've arrived at the atomic scale), and I still don't see that leading to AGI.

Our rudimentary, underpowered brain is GI, and now you're telling me stacking more GPU bricks will lead to AGI? If it did, it would have come by now.

replies(1): >>ben_w+im
2. ben_w+im[view] [source] 2024-05-17 21:11:19
>>Aperoc+(OP)
Our brains have far more discrete compute elements in them than any chip, even though our wetware runs much slower than our silicon. That alone doesn't tell us which is the more "powerful": the overall compute throughput of the brain is unclear, with estimates varying by many orders of magnitude.

I also don't expect LLMs to be the final word on AI architecture, as they need so many more examples compared to organic brains to get anything done. But the hardware… that might just be a red herring.
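To see why those throughput estimates spread so widely, here's a back-of-envelope sketch (not from the thread; all figures are rough literature ballparks I'm assuming, and the "ops per synaptic event" factor is the big unknown):

```python
# Rough brain-throughput estimate, in "ops per second".
# Every number below is an assumption for illustration.
synapses = 1e14                    # ~100 trillion synapses (common ballpark)
rate_low, rate_high = 0.1, 10.0    # average firing rate, Hz (estimates vary)
ops_low, ops_high = 1, 1e4         # "ops" per synaptic event (the big unknown:
                                   # 1 if a spike is one op, much more if
                                   # dendritic computation counts)

low = synapses * rate_low * ops_low
high = synapses * rate_high * ops_high
print(f"{low:.0e} to {high:.0e} ops/s, a {high / low:.0e}x spread")
```

Multiplying three uncertain factors is enough to smear the estimate across six orders of magnitude, which is why "is a GPU cluster more powerful than a brain?" has no settled answer.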

[go to top]