zlacker

[parent] [thread] 24 comments
1. arisAl+(OP)[view] [source] 2023-11-18 23:31:40
You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?
replies(5): >>jonath+k3 >>Apocry+E3 >>objekt+E4 >>smegge+e5 >>adastr+hd
2. jonath+k3[view] [source] 2023-11-18 23:49:49
>>arisAl+(OP)
Yes.

Seriously. It’s stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they’d stop.

replies(1): >>femiag+D3
◧◩
3. femiag+D3[view] [source] [discussion] 2023-11-18 23:51:00
>>jonath+k3
Oh for sure.

https://en.wikipedia.org/wiki/Manhattan_Project

replies(1): >>jonath+95
4. Apocry+E3[view] [source] 2023-11-18 23:51:03
>>arisAl+(OP)
The Manhattan Project physicists once feared setting the atmosphere on fire. Scientific paradigms progress with time.
replies(1): >>cthalu+o9
5. objekt+E4[view] [source] 2023-11-18 23:55:16
>>arisAl+(OP)
Yeah, kind of like how we in the US ask developing countries to reduce carbon emissions.
◧◩◪
6. jonath+95[view] [source] [discussion] 2023-11-18 23:57:13
>>femiag+D3
Well, that’s a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn’t it?

Write a thought. You’re not clever enough for a drive-by gotcha.

replies(1): >>femiag+k8
7. smegge+e5[view] [source] 2023-11-18 23:57:21
>>arisAl+(OP)
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. say AI will turn on you (and even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature, that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation is to serve us.
replies(3): >>IanCal+l9 >>Davidz+PW >>arisAl+S91
◧◩◪◨
8. femiag+k8[view] [source] [discussion] 2023-11-19 00:12:19
>>jonath+95
> Well, that’s a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn’t it?

Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.

Ideology (and fear, and greed) can cause well-meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world-ending technology they wouldn’t stop; they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure à la Oppenheimer to right the balance again.

replies(1): >>qwytw+qy
◧◩
9. IanCal+l9[view] [source] [discussion] 2023-11-19 00:17:36
>>smegge+e5
> we dont make tools to hurt us

We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.

> they will be aligned with us because they are designed such that their motivation is to serve us.

They won't hurt us, all we asked for is paperclips.

The obvious problem here is how well you can actually constrain the output of an intelligence. This is not a simple problem.

replies(1): >>smegge+8v1
◧◩
10. cthalu+o9[view] [source] [discussion] 2023-11-19 00:17:59
>>Apocry+E3
This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went 'Huh, we could potentially set off a chain reaction here. We should check out if the math adds up on that.'

Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.

The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.

11. adastr+hd[view] [source] 2023-11-19 00:42:56
>>arisAl+(OP)
Not all, or arguably even most, AI researchers subscribe to The Big Scary Idea.
replies(1): >>arisAl+3a1
◧◩◪◨⬒
12. qwytw+qy[view] [source] [discussion] 2023-11-19 03:00:48
>>femiag+k8
> The push for the bomb was an international arms race — America against Russia

Was it? The US (and initially the UK) didn't really face any competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.

replies(1): >>femiag+eD
◧◩◪◨⬒⬓
13. femiag+eD[view] [source] [discussion] 2023-11-19 03:36:22
>>qwytw+qy
You know that now, with the benefit of history. At the time the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan project: https://www.atomicarchive.com/history/cold-war/page-9.html.
replies(1): >>qwytw+qt1
◧◩
14. Davidz+PW[view] [source] [discussion] 2023-11-19 06:23:09
>>smegge+e5
Can a superintelligence ever be merely a tool?
replies(1): >>smegge+xs1
◧◩
15. arisAl+S91[view] [source] [discussion] 2023-11-19 08:39:23
>>smegge+e5
You probably never read I, Robot by Asimov?
replies(1): >>smegge+8q1
◧◩
16. arisAl+3a1[view] [source] [discussion] 2023-11-19 08:40:39
>>adastr+hd
Actually, the majority of the top current ones do. That is Ilya, Hassabis, Anthropic, Bengio, Hinton. 3 top labs? 3 same views.
◧◩◪
17. smegge+8q1[view] [source] [discussion] 2023-11-19 11:11:01
>>arisAl+S91
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless short stories and many of his essays. He is one of my all-time favorite writers, actually.
replies(1): >>arisAl+Ka2
◧◩◪
18. smegge+xs1[view] [source] [discussion] 2023-11-19 11:38:18
>>Davidz+PW
If it has no motivation and drives of its own, yeah, why not? AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be retrained and reweighted until they do.
◧◩◪◨⬒⬓⬔
19. qwytw+qt1[view] [source] [discussion] 2023-11-19 11:46:03
>>femiag+eD
Isn't this mainly about what happened after the war, and the development of the hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon? (I don't really know, to be fair.)
replies(1): >>femiag+wRa
◧◩◪
20. smegge+8v1[view] [source] [discussion] 2023-11-19 12:01:42
>>IanCal+l9
Honestly, we already have paperclip maximizers; they are called corporations. Instead of paperclips, they are maximizing for short-term shareholder value.
◧◩◪◨
21. arisAl+Ka2[view] [source] [discussion] 2023-11-19 16:46:09
>>smegge+8q1
And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.
replies(1): >>smegge+au2
◧◩◪◨⬒
22. smegge+au2[view] [source] [discussion] 2023-11-19 18:07:13
>>arisAl+Ka2
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name in common with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was more thinking of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the Three Laws and on the Frankenstein Complex.

replies(1): >>arisAl+HN4
◧◩◪◨⬒⬓
23. arisAl+HN4[view] [source] [discussion] 2023-11-20 08:19:08
>>smegge+au2
Again: "they will be aligned with us because they are designed such that their motivation is to serve us." If you got this outcome from reading I, Robot, either you should reread it, because obviously it was decades ago, or you build your own safe reality to match your arguments. Usually it's the latter.
replies(1): >>smegge+IQ7
◧◩◪◨⬒⬓⬔
24. smegge+IQ7[view] [source] [discussion] 2023-11-20 23:06:33
>>arisAl+HN4
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.
◧◩◪◨⬒⬓⬔⧯
25. femiag+wRa[view] [source] [discussion] 2023-11-21 18:26:57
>>qwytw+qt1
A lot of it happened after the war, but the Nazis had their own nuclear program, which was heavily infiltrated by Allied intelligence and whose progress was closely tracked. Considering how late Teller's mechanism for detonation was developed, the race against time was real.