zlacker

[parent] [thread] 7 comments
1. aantix+(OP)[view] [source] 2024-05-17 17:42:12
He's convinced that AGI is an eventuality.

His call for preparation makes it sound like it's near.

replies(2): >>tempsy+G >>MVisse+H
2. tempsy+G[view] [source] 2024-05-17 17:46:20
>>aantix+(OP)
Hard to imagine. But if you actually believe that, there are probably a bunch of ways to make millions in public markets by betting on that outcome.
replies(1): >>MVisse+U
3. MVisse+H[view] [source] 2024-05-17 17:46:44
>>aantix+(OP)
Probably within 5 years. Compute is growing exponentially, algorithms are improving as well, multimodality allows training on different types of data, etc…

Yeah, this shit is near. Also, it's quite a dangerous experiment we're running. And safety-first people are not at the helm anymore.

replies(1): >>Aperoc+I7
4. MVisse+U[view] [source] [discussion] 2024-05-17 17:48:04
>>tempsy+G
How? The interesting companies are private (OpenAI, Groq, etc.).

Nvidia is already huge. Microsoft and Apple are more like users.

replies(2): >>tempsy+Z2 >>ZiiS+fb
5. tempsy+Z2[view] [source] [discussion] 2024-05-17 18:03:19
>>MVisse+U
So you think that if they came out and said "we have created AGI," there would be no change in the stock price of anything that has to do with AI, even if those stocks are already "huge"?

People said NVDA and FAANG were huge 5 years ago.

6. Aperoc+I7[view] [source] [discussion] 2024-05-17 18:29:03
>>MVisse+H
The premise is that LLMs are a path to AGI.

I'm not convinced. You can throw all the compute you want at it (which, by the way, is no longer growing exponentially; we have arrived at the atomic scale) and I still don't see that leading to AGI.

Our rudimentary, underpowered brain achieves general intelligence, and now you're telling me stacking more GPU bricks will lead to AGI? If that were all it took, it would have come by now.

replies(1): >>ben_w+0u
◧◩◪
7. ZiiS+fb[view] [source] [discussion] 2024-05-17 18:52:28
>>MVisse+U
If it's economically better to hire AI than humans for most jobs, and the AIs only need Nvidia hardware and electricity, then today's Nvidia is tiny, real tiny.
8. ben_w+0u[view] [source] [discussion] 2024-05-17 21:11:19
>>Aperoc+I7
Our brains have far more discrete compute elements in them than even the largest chip, while our wetware elements run much slower than our silicon; that alone doesn't tell us which is the more "powerful". The overall compute throughput of the brain is unclear, with estimates varying by many orders of magnitude.
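
To see why the estimates spread so widely, here's a back-of-envelope sketch in Python (the neuron and synapse counts are standard textbook figures; the firing rates and ops-per-synaptic-event are loose assumptions picked just for illustration):

  # Rough bounds on brain throughput in synaptic ops/sec.
  # ~8.6e10 neurons and ~1e3-1e4 synapses per neuron are textbook
  # figures; firing rates and ops per synaptic event are guesses.
  neurons = 8.6e10
  low = neurons * 1e3 * 0.1 * 1     # sparse firing, 1 op/event  -> ~1e13
  high = neurons * 1e4 * 100 * 100  # busy firing, 100 ops/event -> ~1e19
  print(f"{low:.0e} to {high:.0e} ops/sec")  # bounds ~6 orders apart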

I also don't expect LLMs to be the final word on AI architecture, as they need so many more examples than organic brains do to get anything done. But the hardware… that might just be a red herring.
