zlacker

[return to "Governance of Superintelligence"]
1. lsy+c9 2023-05-22 18:24:55
>>davidb+(OP)
Nothing makes me think of Altman as a grifter more than his trying to spook uneducated lawmakers with sci-fi notions like "superintelligence", for which there are no plausible mechanisms or natural analogues, and for which his solution is to lobby the government to build a moat around his business and limit his competitors. We do not even have a consensus on a working definition of "intelligence", let alone any evidence that it is a linear or unbounded phenomenon, and even if it were, there is no evidence that ChatGPT is a route to even human-level intelligence. The sum total of research into this "field" is a series of long chains of philosophical leaps that rapidly lose any connection to reality, which is no basis for wide-ranging government intervention.
2. famous+sc 2023-05-22 18:40:07
>>lsy+c9
>and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence.

People who say this nonsense need to start properly defining human-level intelligence, because nearly anything you throw at GPT-4, it performs at at least average human level, often well above.

Give criteria that 4 fails that a significant chunk of the human population doesn't also fail and we can talk.

Else this is just another instance of people struggling to see what's right in front of them.

Just blows my mind the lengths some will go to in order to ignore what is already easily verifiable right now. "I'll know AGI when I see it", my ass.

3. Anthon+Vk 2023-05-22 19:29:24
>>famous+sc
> People who say this nonsense need to start properly defining human level intelligence because nearly anything you throw at GPT-4 it performs at at least average human level, often well above.

"Average human level" is pretty boring though. Computers have been doing arithmetic at well above "average human level" since they were first invented. The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

4. godels+3F 2023-05-22 21:18:59
>>Anthon+Vk
> "Average human level" is pretty boring though.

It seems you have the wrong idea of what is being conveyed, or of what average human intelligence is. It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize. There's a reason humans have occupied every continent on Earth and even ventured beyond it. It's far more than being able to do arithmetic or play chess. This all seems unimpressive to us because it is normal, to us. But it certainly isn't normal if we look outside ourselves. Yes, there's intelligence in many lifeforms, even ants, but there is some ineffable, difficult-to-express uniqueness to human intelligence (specifically in its generality) that is being referenced here.

To put it one way, a group of machines that could think at the level of an average teenager (or even lower), but do so 100x faster, would probably outmatch a group of human scientists at solving complex and novel math problems. That isn't "average human level" but below it. "Average human level" is just a shortcut term for this ineffable _capacity_ to generalize and adapt so well. Because we don't even have a fucking definition of intelligence.

5. Anthon+nK 2023-05-22 21:54:11
>>godels+3F
> It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize.

But this is exactly why average is boring.

If you ask ChatGPT what it's like to be in the US Navy, it will have texts written by Navy sailors in its training data and will produce something based on those texts in response to related questions.

If you ask the average person what it's like to be in the US Navy, they haven't been in the Navy, may not know anyone who has, and haven't taken any time to research it, so their answers will be poor. ChatGPT could plausibly give a better response.

But ask those questions of someone who has been in the Navy and they'll answer them better than ChatGPT, even though the average person who has served is no more intelligent than the average person who hasn't.

It's not better at reasoning. It's barely even capable of it; it just has access to training data that the average person lacks.
