zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. bbor+D6[view] [source] 2024-01-08 21:58:40
>>treebr+(OP)
Very interesting, especially the huge jump forward in the first figure, and the possibility that a majority of AI researchers give >10% probability to the human-extinction outcome.

To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?

2. lainga+E7[view] [source] 2024-01-08 22:02:27
>>bbor+D6
A climate forcing has a physical effect on the Earth system that you can model with primitive equations. It is not a social or economic problem (although removing the forcing is).

You might as well roll a ball down an incline and then ask me whether Keynes was right.
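
To make the contrast concrete, here is a minimal sketch of what "quantifiable" looks like on the climate side: a toy zero-dimensional energy-balance model with illustrative forcing and feedback values of my own choosing (nothing like the actual primitive equations a real climate model solves):

    # Toy zero-dimensional energy-balance model: C * dT/dt = F - lam * T
    # Illustrative parameter values only, not a real climate model.
    FORCING  = 3.7       # W/m^2, roughly the forcing from doubling CO2
    FEEDBACK = 1.2       # W/(m^2*K), assumed net climate feedback parameter
    HEAT_CAP = 8.4e8     # J/(m^2*K), effective heat capacity (~200 m of ocean)
    DT       = 86400.0   # time step: one day, in seconds

    temp = 0.0           # K, temperature anomaly from equilibrium
    for _ in range(200 * 365):                    # integrate ~200 years
        imbalance = FORCING - FEEDBACK * temp     # W/m^2, net energy imbalance
        temp += (imbalance / HEAT_CAP) * DT       # forward-Euler update

    print(f"warming after ~200 yr:     {temp:.2f} K")
    print(f"equilibrium warming F/lam: {FORCING / FEEDBACK:.2f} K")

With these made-up numbers the equilibrium warming comes out to F/lam ≈ 3 K. The point isn't the value; it's that the forcing and the response are measurable quantities you can argue over, which is exactly what's missing on the AI side.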

3. bbor+Dn[view] [source] 2024-01-08 23:16:21
>>lainga+E7
Ha, well said, point taken. I’d say AI risk is also a technology problem, but without quantifiable models for the relevant risks, it stops sounding like science and starts being interpreted as philosophy. Which is pretty fair.

If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one rather than merely a “risky” one like climate change, where we know what might happen and just need to pin down how likely it is.

EDIT: Never mind that in that article, climate change was actually the example of a quintessentially uncertain problem… which makes me chuckle. A lesson in relative uncertainty, I suppose.
