zlacker

[return to "Governance of Superintelligence"]
1. lsy+c9[view] [source] 2023-05-22 18:24:55
>>davidb+(OP)
Nothing makes me think of Altman as a grifter more than his trying to spook uneducated lawmakers with sci-fi notions like "superintelligence", for which there are no plausible mechanisms or natural analogues, and for which his proposed solution is to lobby the government to build a moat around his business and limit his competitors. We do not even have a consensus working definition of "intelligence", let alone any evidence that it is a linear or unbounded phenomenon, and even if it were, there is no evidence that ChatGPT is a route to even human-level intelligence. The sum total of research in this "field" is a series of long chains of philosophical leaps that rapidly lose any connection to reality, which is no basis for a wide-ranging government intervention.
◧◩
2. famous+sc[view] [source] 2023-05-22 18:40:07
>>lsy+c9
>and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence.

People who say this nonsense need to start properly defining human-level intelligence, because GPT-4 performs at at least average human level on nearly anything you throw at it, often well above.

Give criteria that GPT-4 fails that a significant chunk of the human population doesn't also fail and we can talk.

Else this is just another instance of people struggling to see what's right in front of them.

Just blows my mind the lengths some will go to ignore what is already easily verifiable right now. "I'll know AGI when I see it," my ass.

◧◩◪
3. Anthon+Vk[view] [source] 2023-05-22 19:29:24
>>famous+sc
> People who say this nonsense need to start properly defining human level intelligence because nearly anything you throw at GPT-4 it performs at at least average human level, often well above.

"Average human level" is pretty boring though. Computers have been doing arithmetic at well above "average human level" since they were first invented. The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

◧◩◪◨
4. godels+3F[view] [source] 2023-05-22 21:18:59
>>Anthon+Vk
> "Average human level" is pretty boring though.

It seems you have the wrong idea of what is being conveyed, or of what average human intelligence is. It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize. There's a reason humans have occupied every continent of the earth, and even places beyond it. It's far more than being able to do arithmetic or play chess. This all seems unimpressive to us because it is normal, to us. But it certainly isn't normal if we look outside ourselves. Yes, there's intelligence in many lifeforms, even ants, but there is some ineffable, difficult-to-express uniqueness to human intelligence (specifically in its generality) that is being referenced here.

To put it one way, a group of machines that could think at the level of an average teenager (or even lower) but could do so 100x faster would probably outmatch a group of human scientists in solving complex and novel math problems. That isn't "average human level" but below it. "Average human level" is just a shortcut term for this ineffable description of the _capacity_ to generalize and adapt so well. Because we don't even have a fucking definition of intelligence.

◧◩◪◨⬒
5. Anthon+nK[view] [source] 2023-05-22 21:54:11
>>godels+3F
> It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize.

But this is exactly why average is boring.

If you ask ChatGPT what it's like to be in the US Navy, it will have texts written by Navy sailors in its training data and produce something based on those texts in response to related questions.

If you ask the average person what it's like to be in the US Navy, they haven't been in the Navy, may not know anyone who is, haven't taken any time to research it, so their answers will be poor. ChatGPT could plausibly give a better response.

But if you ask the questions of someone who has, they'll answer related questions better than ChatGPT. Even if the average person who has been in the Navy has no greater intelligence than the average person who hasn't.

It's not better at reasoning. It's barely even capable of it, but has access to training data that the average person lacks.

◧◩◪◨⬒⬓
6. godels+CQ[view] [source] 2023-05-22 22:37:39
>>Anthon+nK
> But this is exactly why average is boring.

Honestly, I think it is the lens. Personally I find it absolutely amazing. It's this incredibly complex thing that we've been trying to describe for thousands of years and have largely failed to (we've gotten better, of course). It's this thing right in front of us that looks simple, but only because not many try to peek behind the curtain. Looking behind there is like trying to describe a Lovecraftian monster. But this is all in plain sight. That's pretty crazy imo. But hey, dig down the rabbit hole of any subject and you'll find this complex world. Most things are like a collage: from far away the shape looks clear and precise, but on close inspection you find that each tile is itself another beautiful piece. This is true even for seemingly simple things, and honestly I think that's even more beautiful. This complex and chaotic world is all around us, but we take it for granted. Being bored comes down to a choice.

> If you ask the average person what it's like to be in the US Navy, ChatGPT could plausibly give a better response.

There's also a bias. Does a human know the instructions are a creative exercise? It is hard to measure, because what you'd need to prompt a human with is "Supposing you were a conman trying to convince me you were in the Navy, how would you describe what it was like?" The average human response is going to default to not lying and fabricating things. You also need to remember that your interpretation (assuming you aren't/weren't in the Navy) is that of someone hearing a story rather than aligning that story to lived experiences. You'd need to compare the average human making up a story to GPT, not an average human's honest response.

> It's barely even capable of it, but has access to training data that the average person lacks.

I do agree that GPT is great as a pseudo, noisy library. I find that a wonderful and very useful tool. I often forget the specific words used to describe certain concepts, which is hard to google. GPT finds them pretty well, or returns something close enough that a quick iterative prompt turns up the desired term. Much faster than when I used to do this by googling. But yeah, I think we both agree that GPT is by no means sentient and likely not intelligent (ill-defined, and defined differently by different people). But we can find many and different things interesting. My main point is that I wanted to explain why I find intelligence so fascinating. Hell, it is a major part of why I got into ML research in the first place (Asimov probably helped a lot too).
