zlacker

[parent] [thread] 1 comment
1. toomuc+(OP)[view] [source] 2023-05-22 19:37:48
> The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

I imagine an important concern is learning and improvement velocity. Humans get old, tired, etc.; GPUs do not. It isn't the case now, but it's fuzzy how fast we could collectively get there. Break problem domains out into modules, send them off to the silicon dojos until your models exceed human capabilities, and then roll them back up. You can pick from ChatGPT plugins; why wouldn't an LLM hypervisor/orchestrator do the same?
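To make the modules-then-roll-up idea concrete, here's a minimal sketch of what such an orchestrator's routing layer might look like. Everything here is invented for illustration (the `Module` type, the `skill` benchmark score, the routing rule); it just shows the shape of "dispatch each task to whichever specialized module has exceeded the human baseline."

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    domains: set[str]   # problem domains this module was trained on
    skill: float        # hypothetical benchmark score vs. human baseline (1.0 = parity)

def route(task_domain: str, modules: list[Module]) -> Module | None:
    """Pick the strongest module covering the task's domain,
    but only if it has exceeded human capability (skill > 1.0)."""
    candidates = [m for m in modules if task_domain in m.domains and m.skill > 1.0]
    return max(candidates, key=lambda m: m.skill, default=None)

modules = [
    Module("chess-engine", {"chess"}, skill=3.5),
    Module("code-helper", {"coding"}, skill=0.8),   # not yet superhuman
    Module("math-prover", {"math", "logic"}, skill=1.2),
]

assert route("chess", modules).name == "chess-engine"
assert route("coding", modules) is None  # this domain stays with humans for now
```

The point of the sketch: each module can be trained (and retrained) independently in its "dojo," and the orchestrator only promotes a module into the rolled-up system once it clears the bar.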

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

replies(1): >>dragon+P
2. dragon+P[view] [source] 2023-05-22 19:41:46
>>toomuc+(OP)
> The concern is the learning & improvement velocity. Humans get old, tired, etc. GPUs do not.

They do, though.

Of course, replacing the worn out hardware while keeping the software is easier with GPUs.