zlacker

[parent] [thread] 11 comments
1. bbor+(OP)[view] [source] 2024-01-08 21:58:40
Very interesting, especially the huge jump forward in the first figure and a possible majority of AI researchers giving >10% to the Human Extinction outcome.

To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?

replies(6): >>lainga+11 >>michae+i3 >>kranke+K4 >>blames+M5 >>chasd0+he >>staunt+Al
2. lainga+11[view] [source] 2024-01-08 22:02:27
>>bbor+(OP)
A climate forcing has a physical effect on the Earth system that you can model with primitive equations. It is not a social or economic problem (although removing the forcing is).

You might as well roll a ball down an incline and then ask me whether Keynes was right.
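
(To sketch what I mean with something far cruder than the primitive equations a real climate model integrates: even the toy zero-dimensional energy-balance model

C · dT/dt = ΔF − λ·T,   so at equilibrium ΔT ≈ ΔF / λ

already links a forcing ΔF (W/m²) to warming through a feedback parameter λ, with nothing social or economic anywhere in it.)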

replies(3): >>HPMOR+2e >>bbor+0h >>staunt+Ri
3. michae+i3[view] [source] 2024-01-08 22:13:31
>>bbor+(OP)
We have extremely detailed and well-tested models of climate. It's worth reading the IPCC report - it's extremely interesting, and quite accessible. I was somewhat skeptical of climate work before I began reading, but I spent hundreds of hours understanding it, and was quite impressed by the depth of the work. By contrast, our models of future AI are very weak. Something like the scaling laws paper or the Chinchilla paper is far less convincing than the best climate work. And arguments like those in Nick Bostrom's or Stuart Russell's books are much more conjectural and qualitative (& less well-tested) than the climate argument.

I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply not nearly as detailed or well-tested as in the case of climate.
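
To make the contrast concrete, if I remember it right, the heart of the Chinchilla paper is a fitted loss curve of roughly the form

L(N, D) ≈ E + A/N^α + B/D^β

(N = parameters, D = training tokens), from which the compute-optimal rule of thumb of roughly ~20 tokens per parameter falls out. It's a genuinely useful empirical regularity, but it predicts next-token loss, not capabilities or outcomes, whereas the climate argument runs from radiative physics all the way through to impacts.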

replies(1): >>bbor+dg
4. kranke+K4[view] [source] 2024-01-08 22:19:08
>>bbor+(OP)
Because the AI human extinction idea is entirely conjecture, while climate change is just a progression of current models.

What’s the progression that leads to AI human extinction?

5. blames+M5[view] [source] 2024-01-08 22:23:44
>>bbor+(OP)
Profit motive
6. HPMOR+2e[view] [source] [discussion] 2024-01-08 23:01:11
>>lainga+11
Wait, I gotta defend my boy Keynes here. His predictions have been nearly as well validated as predicting the outcome of a ball rolling down a plank. Just reading the first part of the General Theory, you could have correctly predicted the labor strikes in 2023. Keynes’ very clear predictions continue to hold up under empirical observation.
7. chasd0+he[view] [source] 2024-01-08 23:02:16
>>bbor+(OP)
There's also the possibility of an AGI being benevolent or, what I find more likely, suicidal.
replies(1): >>Vecr+pG
8. bbor+dg[view] [source] [discussion] 2024-01-08 23:12:03
>>michae+i3
Amazing response, thanks for taking the time - concise, clear, and I don’t think I’ll be using that comparison again because of you. I see now how much more convincing mathematical models are than philosophical arguments in this context, and why that allows modern climate-change-believing scientists to dismiss this (potential, very weak, uncertain) cogsci consensus.

In this light, widespread acknowledgement of xrisk will only come once we have a statistical model that shows it will happen. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment” — a grim but useful line of work.

FAKE_EDIT: In fact, after looking it up, it sorta is! After a few minutes skimming I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…
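
To give a flavor of what such modeling might look like (a toy sketch of my own, not the content of that paper): suppose capability I feeds back into its own growth rate as

dI/dt = c · I^k

Then k < 1 gives decelerating returns, k = 1 gives ordinary exponential growth, and k > 1 blows up in finite time at t* = 1 / (c·(k−1)·I₀^(k−1)). On that framing, the whole “explosion vs. fizzle” debate compresses into an empirical argument about k (the returns on cognitive reinvestment), which at least looks like something you could try to measure.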

9. bbor+0h[view] [source] [discussion] 2024-01-08 23:16:21
>>lainga+11
Ha, well said, point taken. I’d say AI risk is also a technology problem, but without quantifiable models for the relevant risks, it stops sounding like science and starts being interpreted as philosophy. Which is pretty fair.

If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one, rather than merely “risky” like climate change (we know what might happen, we just need to figure out how likely it is).

EDIT: Disregarding the fact that, in that article, climate change was actually the example of a quintessentially uncertain problem… makes me chuckle. A lesson in relative uncertainty.

10. staunt+Ri[view] [source] [discussion] 2024-01-08 23:25:11
>>lainga+11
I would think any scenario where humans actually go extinct (as opposed to just civilization collapsing and the population plummeting, which would be terrible enough) has to involve a lot of social and economic modeling...
11. staunt+Al[view] [source] 2024-01-08 23:39:56
>>bbor+(OP)
I think human extinction due to climate change is extremely unlikely. However, civilization collapsing is bad enough. We can't be certain whether or not that will actually happen, but we do know it will if we do nothing. We even have a pretty good idea of when, it's not yet too late, and we have an actionable scientific consensus about what to do about it.

In many ways AI risk looks like the opposite. It might actually cause extinction, but we have no idea how likely that is, and neither do we have any idea how likely any bad not-quite-extinction outcome is. The outcome might even be very positive. We have no idea when anything will happen, and the only realistic plan that's sure to avoid the bad outcome is to stop building AI, which also means we don't get the potential good outcome. And there's no scientific consensus about that (or anything else) being a good plan, because it's almost impossible to gather concrete empirical evidence about the risk. By the time such evidence is available, it might be too late (this could also have happened with climate change, we got lucky there...)

12. Vecr+pG[view] [source] [discussion] 2024-01-09 02:19:00
>>chasd0+he
Just hope it won't try to take us with it (see Dark Star's "smart bomb").