zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand: because the model can predict words, you exploit the human tendency to anthropomorphize, and suddenly it follows that it's something capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

2. lm2846+qd[view] [source] 2023-05-16 12:33:15
>>srslac+I7
100% this. I don't get how, even on this website, people are so clueless.

Give them a semi-human-sounding puppet and they think Skynet is coming tomorrow.

If we've learned anything from the past few months, it's how gullible people are. Wishful thinking is a hell of a drug.

3. digbyb+He[view] [source] 2023-05-16 12:40:26
>>lm2846+qd
I’m open-minded about this; I see people more knowledgeable than me on both sides of the argument. Can someone explain how Geoffrey Hinton can be considered clueless?
4. srslac+9h[view] [source] 2023-05-16 12:53:48
>>digbyb+He
By his own account, Hinton asked PaLM to explain a dad joke he had come up with, convinced that his clever, advanced joke would take a lifetime of experience to understand. When PaLM perfectly articulated why the joke was funny, he quit Google, and he is, conveniently, still going to continue working on AI despite the "risks." Not exactly the best example.
5. digbyb+Qi[view] [source] 2023-05-16 13:02:26
>>srslac+9h
Hinton said that the ability to explain a joke was among the first things that made him reassess these models' capabilities, not the only thing. You make it sound as though Hinton is obviously clueless, yet there are few people with deeper knowledge of and more experience working with neural networks. People told him he was crazy for thinking neural networks could do anything useful; now it seems people are calling him crazy for the reverse. I’m genuinely confused about this.
6. reveli+Ew[view] [source] 2023-05-16 14:11:59
>>digbyb+Qi
Not clueless, but unfortunately engaging in motivated reasoning.

Google spent years doing nothing much with its AI because its employees (like Hinton) got themselves locked in an elitist hard-left purity spiral, in which they convinced each other that if plebby ordinary non-Googlers could use AI, they would do terrible things, like draw pictures of non-diverse people. That's why they never launched Imagen and left the whole generative-art space to OpenAI, Stability, and Midjourney.

Now the tech has finally leaked out of the ivory tower and AI progress is no longer happening where he is, but Hinton finds himself at retirement age and no longer feeling much like doing hard-core product development. What to do? Lucky, lucky: he lives in a world where the legacy media laps up any academic with a doomsday story. So he quits and starts enjoying the life of a celebrity public intellectual, praised as a man of superior foresight and care for the world, unlike those awful hoi polloi shipping products and irresponsibly not voting for Biden (see the last sentence of his Wired interview). If nothing happens and the boy cried wolf, nobody will mind; it'll all be forgotten. But if there's any way events can be twisted into interpreting reality as AI being bad, he's suddenly the man of the hour, with Presidents and Prime Ministers queuing up to ask him what to do.

It's all really quite pathetic. Academic credentials are worth nothing with respect to such claims and Hinton hasn't yet managed to articulate how, exactly, AI doom is supposed to happen. But our society doesn't penalize wrongness when it comes from such types, not even a tiny bit, so it's a cost-free move for him.

7. digbyb+TE[view] [source] 2023-05-16 14:49:56
>>reveli+Ew
I actually do hope you're right. I've been looking forward to an AI future my whole life and would prefer not to be worrying now about existential risk. It reminds me of when people started talking about how the LHC might create a black hole and swallow the Earth. But I have more confidence in the theories that convinced people that was nearly impossible than in what we're seeing now.

Everyone engages in motivated reasoning. The psychoanalysis you provide for Hinton could easily be spun in the opposite direction: a man who spent his entire adult life on neural networks, and who will go down in history as their "godfather," surely would prefer for that work to have been a good thing. Which would then give him even more credibility. But these are just stories we tell about people. It's the arguments we should be focused on.

I don't think "how AI doom is supposed to happen" is all that big of a mystery. The question is simply: "Is an intelligence explosion possible?" If the answer is no, then OK, let's move on. If the answer is "maybe," then all the chatter about AI alignment and safety should be taken seriously, because it's very difficult to know how safe a superintelligence would be.

8. reveli+rZ[view] [source] 2023-05-16 16:14:07
>>digbyb+TE
> surely would prefer for that to have been a good thing. Which would then give him even more credibility

Why? Both directions would be motivated reasoning without credibility. Credibility comes from a plausible articulation of how such an outcome would be likely to happen, which is lacking here. An "intelligence explosion" isn't something plausible or concrete that can be debated; it's essentially a religious concept.

9. digbyb+Yi1[view] [source] 2023-05-16 17:36:23
>>reveli+rZ
The argument is: "We are intelligent and seem to be able to build new intelligences of a certain kind. If we are able to build a new intelligence that itself is able to self-improve, and having improved is able to improve further, then an intelligence explosion is possible." That may or may not be fallacious reasoning, but I don't see how it's religious. As far as I can tell, the religious perspective would be the one holding that there's something fundamentally special about the human brain such that it cannot be simulated.