zlacker

[return to "Governance of Superintelligence"]
1. lsy+c9 2023-05-22 18:24:55
>>davidb+(OP)
Nothing makes me think of Altman as a grifter more than his trying to spook uneducated lawmakers with sci-fi notions like "superintelligence", for which there are no plausible mechanisms or natural analogues, and for which the solution is to lobby the government to build a moat around his business and limit his competitors. We do not even have a consensus on a working definition of "intelligence", let alone any evidence that it is a linear or unbounded phenomenon, and even if it were, there is no evidence that ChatGPT is a route to even human-level intelligence. The sum total of research into this "field" is a series of long chains of philosophical leaps that rapidly escape any connection to reality, which is no basis for a wide-ranging government intervention.
◧◩
2. famous+sc 2023-05-22 18:40:07
>>lsy+c9
>and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence.

People who say this nonsense need to start properly defining human-level intelligence, because nearly anything you throw at GPT-4, it performs at at least average human level, often well above.

Give criteria that GPT-4 fails that a significant chunk of the human population doesn't also fail, and we can talk.

Else this is just another instance of people struggling to see what's right in front of them.

Just blows my mind the lengths some will go to to ignore what is already easily verifiable right now. "I'll know AGI when I see it", my ass.

◧◩◪
3. Anthon+Vk 2023-05-22 19:29:24
>>famous+sc
> People who say this nonsense need to start properly defining human level intelligence because nearly anything you throw at GPT-4 it performs at at least average human level, often well above.

"Average human level" is pretty boring though. Computers have been doing arithmetic at well above "average human level" since they were first invented. The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

◧◩◪◨
4. toomuc+pm 2023-05-22 19:37:48
>>Anthon+Vk
> The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

I imagine an important concern is the learning and improvement velocity. Humans get old, tired, etc.; GPUs do not. It isn't the case now, but it is fuzzy how fast we could collectively get there. Break out problem domains into modules, send them off to the silicon dojos until your models exceed human capabilities, and then roll them up. You can pick from OpenGPT plugins, so why wouldn't an LLM hypervisor/orchestrator do the same?

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
