zlacker

1. digbyb+ (OP) 2023-05-16 14:49:56
I actually do hope you're right. I've been looking forward to an AI future my whole life and would prefer not to be worrying now about existential risk. It reminds me of when people started talking about how the LHC might create a black hole and swallow the earth. But I have more confidence in the theories that convinced people such an outcome was nearly impossible than in what we're seeing now.

Everyone engages in motivated reasoning. The psychoanalysis you offer of Hinton could just as easily be spun in the opposite direction: a man who spent his entire adult life on neural networks, and who will go down in history as their "godfather", surely would prefer for that work to have been a good thing. Warning that it's dangerous anyway would then give him even more credibility. But these are just stories we tell about people. It's the arguments we should be focused on.

I don't think "how AI doom is supposed to happen" is all that big of a mystery. The question is simply: "Is an intelligence explosion possible?" If the answer is no, then OK, let's move on. If the answer is "maybe", then all the chatter about AI alignment and safety should be taken seriously, because it's very difficult to know how safe a superintelligence would be.

replies(1): >>reveli+yk
2. reveli+yk 2023-05-16 16:14:07
>>digbyb+(OP)
> surely would prefer for that to have been a good thing. Which would then give him even more credibility

Why? Both directions would be motivated reasoning without credibility. Credibility comes from a plausible articulation of how such an outcome would be likely to happen, which is lacking here. An "intelligence explosion" isn't something plausible or concrete that can be debated; it's essentially a religious concept.

replies(1): >>digbyb+5E
3. digbyb+5E 2023-05-16 17:36:23
>>reveli+yk
The argument is: "We are intelligent, and we seem to be able to build new intelligences of a certain kind. If we can build a new intelligence that is itself able to self-improve, and having improved is able to improve further, then an intelligence explosion is possible." That may or may not be fallacious reasoning, but I don't see how it's religious. As far as I can tell, the religious perspective would be the one holding that there's something fundamentally special about the human brain such that it cannot be simulated.
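
To make the shape of that argument concrete, here's a toy sketch in Python (the numbers and gain functions are invented purely for illustration, not claims about any real system). Whether the loop "explodes" depends entirely on whether each round of self-improvement keeps paying off:

    # Toy model of the recursive self-improvement argument.
    # Illustration only: the gain functions below are made up,
    # not claims about how any real AI system behaves.

    def run(gain, capability=1.0, generations=30):
        for _ in range(generations):
            capability *= gain(capability)
        return capability

    # Constant returns per generation: exponential blow-up ("explosion").
    print(run(lambda c: 1.5))                   # ~191,751

    # Diminishing returns: each generation helps less; growth is
    # roughly linear and no explosion ever happens.
    print(run(lambda c: 1.0 + 1.0 / (1.0 + c)))

In other words, the disagreement isn't really over the structure of the loop; it's over which gain curve reality supplies, and nobody currently knows that.
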
replies(1): >>reveli+DY
4. reveli+DY 2023-05-16 19:21:12
>>digbyb+5E
You're conflating two questions:

1. Can the human brain be simulated?

2. Can such a simulation recursively self-improve on such a rapid timescale that it becomes so intelligent we can't control it?

What we have in contemporary LLMs is something that appears to approximate the behavior of a small part of the brain, with some major differences that force us to re-evaluate what our definition of intelligence is. So maybe you could argue the brain is already being simulated for some broad definition of simulation.

But there's no sign of any recursive self-improvement, nor any sign of LLMs gaining agency and self-directed goals, nor even a plan for how to get there. That remains hypothetical sci-fi. Whilst there are experiments at the edges with using AI to improve AI, like RLHF, Constitutional AI and so on, these are neither recursive nor about upgrading mental abilities. They're about upgrading control instead, and in fact RLHF appears to degrade mental abilities!
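
To see why those techniques target control rather than capability, here's a self-contained toy of an RLHF-shaped loop (invented rewards, a three-response "policy", and a plain REINFORCE update standing in for PPO; none of these names come from any real library):

    import math, random

    # Toy RLHF-shaped loop. The "policy" is a distribution over three
    # canned responses; "human_reward" stands in for a reward model
    # trained on human preference data. Structurally, note what's
    # missing: nothing is recursive, and no new responses are ever
    # created -- the loop only re-weights an existing repertoire.

    responses = ["helpful answer", "unhelpful answer", "harmful answer"]
    human_reward = {"helpful answer": 1.0,
                    "unhelpful answer": -0.2,
                    "harmful answer": -1.0}

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [0.0, 0.0, 0.0]        # trainable policy
    ref = softmax(logits)           # frozen reference model
    beta, lr = 0.1, 0.5             # KL penalty weight, learning rate

    for _ in range(500):
        probs = softmax(logits)
        i = random.choices(range(3), weights=probs)[0]
        # Reward shaped with a KL-style penalty for drifting from the
        # reference -- an explicit constraint, not a capability upgrade.
        r = human_reward[responses[i]] - beta * math.log(probs[i] / ref[i])
        # REINFORCE-style gradient step on the sampled response.
        for j in range(3):
            logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])

    print({t: round(p, 3) for t, p in zip(responses, softmax(logits))})

Run it and the probability mass shifts onto "helpful answer": the output distribution changes, but the set of possible outputs never grows, and nothing feeds back into the system's own design.
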

So what fools like Hinton are talking about isn't even on the radar right now. The gap between where we are today and a Singularity is just as big as it always was. For multiple fundamental reasons, GPT-4 is not only incapable of taking over the world; it's incapable of even wanting to do so.

Yet this nonsense scenario is proving nearly impossible to kill with basic facts like those outlined above. Close inspection reveals belief in the Singularity to be unfalsifiable and thus ultimately religious; indeed, it is suspiciously similar to the Christian Second Coming apocalypse. Literally any practical objection to the idea can be answered with some variant of "because this AI will be so intelligent it will be unknowable and all-powerful". You can't meaningfully debate the existence of such an entity, any more than you can debate the existence of God.
