zlacker

3 comments
1. lainga+(OP) 2024-01-08 22:02:27
A climate forcing has a physical effect on the Earth system that you can model with primitive equations. It is not a social or economic problem (although removing the forcing is).

You might as well roll a ball down an incline and then ask me whether Keynes was right.
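
To make the point concrete, here's a minimal toy sketch (mine, not taken from any climate model): a ball on a frictionless incline, a system whose outcome follows from the physics alone, with no social theory entering into it.

    import math

    g = 9.81                   # gravitational acceleration, m/s^2
    theta = math.radians(30)   # incline angle
    a = g * math.sin(theta)    # constant acceleration along the slope

    # Crude Euler integration: roll the ball 5 m down the slope.
    dt, t, x, v = 0.01, 0.0, 0.0, 0.0
    while x < 5.0:
        v += a * dt
        x += v * dt
        t += dt

    print(f"Reaches 5 m after ~{t:.2f} s at ~{v:.2f} m/s")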

replies(3): >>HPMOR+1d >>bbor+Zf >>staunt+Qh
2. HPMOR+1d 2024-01-08 23:01:11
>>lainga+(OP)
Wait, I gotta defend my boy Keynes here. His predictions have been validated nearly as well as predicting the outcome of a ball rolling down a plank. The first part of the General Theory alone correctly predicted the labor strikes of 2023. Keynes’ very clear predictions continue to hold up under empirical observation.
3. bbor+Zf 2024-01-08 23:16:21
>>lainga+(OP)
Ha, well said, point taken. I’d say AI risk is also a technology problem, but without quantifiable models for the relevant risks, it stops sounding like science and starts being interpreted as philosophy. Which is pretty fair.

If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one, rather than merely “risky” like climate change (we know what might happen, we just need to figure out how likely it is).

EDIT: Disregarding the fact that in that article, climate change was actually the example of a quintessentially uncertain problem… which makes me chuckle. A lesson in relative uncertainty.

4. staunt+Qh 2024-01-08 23:25:11
>>lainga+(OP)
I would think any scenario where humans actually go extinct (as opposed to just civilization collapsing and the population plummeting, which would be terrible enough) has to involve a lot of social and economic modeling...