zlacker

[parent] [thread] 14 comments
1. cthalu+(OP)[view] [source] 2023-11-20 10:21:12
He's an 'AGI potentially poses an existential threat' guy. He's given his p(doom) as being somewhere between 5 and 50 percent. If the people in charge of potentially making a thing you think could have up to a 50% chance of wiping out humanity are asking you if you can help, you're probably going to offer whatever help you can.
replies(1): >>Random+T4
2. Random+T4[view] [source] 2023-11-20 10:58:29
>>cthalu+(OP)
With his probability estimates that high, he should probably do everything in his power to stop all development.
replies(3): >>xvecto+76 >>RcouF1+l6 >>Zolde+1d
3. xvecto+76[view] [source] [discussion] 2023-11-20 11:07:01
>>Random+T4
He has already indicated that he intends to slow development by over 90%.
replies(1): >>Boiled+2c
4. RcouF1+l6[view] [source] [discussion] 2023-11-20 11:08:25
>>Random+T4
At that high of a probability of doom, one could argue that the most ethical thing to do is to assassinate everyone involved in AI research. Probability of doom × number of people affected is 0.05 × 8 billion = 400 million expected deaths, versus a few thousand AI researchers.

Of course, no one really believes the probability of doom is that high.

replies(3): >>Feepin+Do >>JChara+4q >>Zak+3V
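[Editor's note: the expected-value arithmetic in the comment above, sketched as a few lines of Python. The variable names are illustrative, not from the thread.]

```python
# Expected-value sketch of the argument upthread (illustration only):
# expected deaths = p(doom) * population affected.
p_doom = 0.05                  # low end of the 5-50% range quoted earlier
population = 8_000_000_000     # rough world population

expected_deaths = p_doom * population
print(f"{expected_deaths:,.0f}")  # 400,000,000 -- the "400 million" in the comment
```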
◧◩◪
5. Boiled+2c[view] [source] [discussion] 2023-11-20 11:44:04
>>xvecto+76
Do you have a reference for that? I'd like to read / view it.
replies(1): >>lsicla+Rg
6. Zolde+1d[view] [source] [discussion] 2023-11-20 11:51:04
>>Random+T4
I also don't believe in those doom odds, but he does, and he still wants to roll the dice?

I would not take those odds of destroying the world even in a D&D campaign with a few friends. If that is really what they think they are building here...

7. lsicla+Rg[view] [source] [discussion] 2023-11-20 12:19:57
>>Boiled+2c
I believe this is a reference to Emmett’s Sept 16 [1] post on X:

> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.

> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.

[1] https://x.com/eshear/status/1703178063306203397?s=20

replies(1): >>rvnx+2q
8. Feepin+Do[view] [source] [discussion] 2023-11-20 13:08:15
>>RcouF1+l6
No; rather, people behave according to virtue ethics even when they claim to be utilitarian.
9. rvnx+2q[view] [source] [discussion] 2023-11-20 13:17:15
>>lsicla+Rg
Quite bad news: imagine a CEO trying to slow down his own company instead of pushing it forward :|
replies(2): >>kcb+Sx >>jacque+Tf3
10. JChara+4q[view] [source] [discussion] 2023-11-20 13:17:27
>>RcouF1+l6
Even if p = 1, people would have moral dilemmas with it because it hasn't happened yet (think of Minority Report).
11. kcb+Sx[view] [source] [discussion] 2023-11-20 13:49:49
>>rvnx+2q
It's almost like they think they live in a vacuum, where there isn't a particular nation with essentially infinite resources and people just as smart as them that will immediately capitalize on these delays.
12. Zak+3V[view] [source] [discussion] 2023-11-20 15:24:53
>>RcouF1+l6
Even with purely utilitarian ethics, that wouldn't be ethical because it wouldn't work.

I'm not an AI researcher, but I know what a neural network is, I've implemented a machine learning algorithm or two, and I can read a CS paper. Once the luddite cult murdering AI researchers was dead or imprisoned, I suspect the demand for mediocre self-taught AI researchers would increase, and I might be motivated to become one.

If you somehow managed to destroy all copies of the best recent research, there are still many people with enough general knowledge of the techniques used who aren't currently working in the field to get things back to the current level of technology in under a decade given a few billion dollars to spend on it. Several of them are probably reading HN.

replies(1): >>kridsd+386
13. jacque+Tf3[view] [source] [discussion] 2023-11-21 02:53:20
>>rvnx+2q
With the rate at which employees are leaving, they may well see that 10% figure as an upper bound, or a goal to strive for.
14. kridsd+386[view] [source] [discussion] 2023-11-21 20:18:44
>>Zak+3V
I think that what you wrote here makes you an AI researcher.

If you were Iranian and said "I'm not a nuclear physicist, but I do know the math and I have built a small reactor." I would strongly suggest you be on the lookout for Mossad agents.

replies(1): >>Zak+Iaf
15. Zak+Iaf[view] [source] [discussion] 2023-11-24 14:22:04
>>kridsd+386
I suppose that might come down to perspective. A luddite cult that thinks AI needs to be stopped at a cost of killing anyone who might work on it would probably put me on their list, but not very high. Actual AI researchers would not likely consider me an AI researcher.