zlacker

[return to "Governance of Superintelligence"]
1. lsy+c9[view] [source] 2023-05-22 18:24:55
>>davidb+(OP)
Nothing makes me think of Altman as a grifter more than his trying to spook uneducated lawmakers with sci-fi notions like "superintelligence", for which there are no plausible mechanisms or natural analogues, and for which his proposed solution is to lobby the government to build a moat around his business and limit his competitors. We do not even have consensus on a working definition of "intelligence", let alone any evidence that it is a linear or unbounded phenomenon, and even if it were, there is no evidence that ChatGPT is a route to even human-level intelligence. The sum total of research in this "field" is a series of long chains of philosophical leaps that rapidly lose any connection to reality, which is no basis for a wide-ranging government intervention.
2. famous+sc[view] [source] 2023-05-22 18:40:07
>>lsy+c9
>and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence.

People who say this nonsense need to start properly defining human-level intelligence, because on nearly anything you throw at GPT-4 it performs at least at the average human level, often well above it.

Give me criteria that GPT-4 fails and that a significant chunk of the human population doesn't also fail, and we can talk.

Else this is just another instance of people struggling to see what's right in front of them.

It just blows my mind the lengths some will go to in order to ignore what is already easily verifiable right now. "I'll know AGI when I see it", my ass.

3. godels+lD[view] [source] 2023-05-22 21:09:15
>>famous+sc
> Just blows my mind the lengths some will go to ignore what is already easily verifiable right now. "I'll know agi when i see it", my ass.

You and me both. I mean, look at how people conflate the ability to recall with intelligence. Is memory part of intelligence? Yeah. Is it all of it? No. If it were, people with eidetic memories would be the smartest and most successful people around, and they aren't.

We have no idea how good systems like GPT and Bard actually are because we have no idea what is in their training data. What we do know is that when we can uncover portions of that data, the models do really well on what's in there and noticeably worse on what isn't. That gap is about generalization, which is a big part of intelligence. Unfortunately, benchmark after benchmark turns out to be contaminated, so we can't measure generalization cleanly, and it was a noisy proxy to begin with. We've quietly switched to making the questions on the test identical, or nearly identical, to the questions on the homework. That is different from testing novel problems. (For a rough idea of what an overlap check looks like, see the sketch below.)
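To make the contamination point concrete, here is a minimal sketch of the kind of n-gram overlap check people mean when they talk about test questions showing up in training data. The corpus contents, the n-gram length, and the 0.5 threshold are placeholders for illustration, not any benchmark's actual methodology:

    # Rough sketch: flag eval questions whose word n-grams overlap heavily
    # with training text. Real contamination audits are far more involved.
    def ngrams(text, n=8):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def contamination_score(question, training_docs, n=8):
        q = ngrams(question, n)
        if not q:
            return 0.0
        train = set()
        for doc in training_docs:
            train |= ngrams(doc, n)
        # fraction of the question's n-grams already "seen" in training text
        return len(q & train) / len(q)

    # Hypothetical usage with made-up data:
    docs = ["the quick brown fox jumps over the lazy dog near the old mill"]
    question = "the quick brown fox jumps over the lazy dog near the river"
    score = contamination_score(question, docs, n=4)
    if score > 0.5:
        print(f"overlap {score:.2f}: likely recall, not generalization")

A high score doesn't prove the model memorized the answer, and a low score doesn't prove it generalized; it just shows how crude our proxies are, which is the point.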

And it doesn't help that a lot of the people speaking about it haven't spent significant time in ML. People who haven't studied cognition. People who haven't studied statistics. An academic degree isn't needed, but the knowledge is, and these are people without it. We see a lot of this on HN: people who think training a transformer "from scratch" makes them an expert, maybe because CS people pick up a little domain knowledge quickly but can't tell domain knowledge apart from domain expertise.

Then, among experts, the tails of the distribution dominate the conversation: overhype on one end and "it's just memorization" on the other. We let the discussion be driven by people claiming we're going to create superintelligences that will (with no justification) enslave humanity, and by others insisting the models are just copy machines. Neither is correct or helpful. It's crying wolf before the wolf comes: if we say a wolf is eating all our sheep when in reality we've only seen it patrolling the edge of the woods, people won't listen when the wolf actually attacks. (AI dangers do exist, but they arrive long before superintelligence, and they are not "generates disproportionately white faces.")

> "I'll know agi when i see it", my ass.

I don't know any researcher who realistically believes this. The vast majority of us believe we don't have a good definition and that the threshold will be unclear; if we do build it, we probably won't recognize it as intelligent at first. That being the case, of course we should expect pushback. We SHOULD. It's insane to attribute intelligence to something we don't know has intelligence, though it's fine to question whether it does; settling that either way would require a clear definition, and if you've got a secret, consistent one, then please, tell us all. Unless you think that when/if we build an intelligence it comes into existence already aware of its own sentience, which would be weird. Can you name a creature that knows its own sentience or consciousness at birth? A lot of domain-expertise nuance gets ignored because at the surface level the question looks trivial, but domain expertise is knowing the depth and nuance of seemingly simple things.

Most of our AI conversations are downright idiotic. They're dominated by grifters and people chasing views and playing on emotion, not by experts. The truth is that we don't know a lot and this is just a very messy time, with lots of people adding noise to an already noisy environment. Maybe more of us should do what the experts tend to do and not talk confidently in public about things we don't have the answers to.

[go to top]