zlacker

1. bbor (OP) 2024-01-08 23:16:21
Ha, well said, point taken. I’d say AI risk is also a technology problem, but without quantifiable models for the relevant risks, it stops sounding like science and starts being interpreted as philosophy. Which is pretty fair.

If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one, rather than merely “risky” like climate change (we know what might happen, we just need to figure out how likely it is).

EDIT: Disregard the fact that in that article, climate change was actually the example of a quintessentially uncertain problem… which makes me chuckle. A lesson in relative uncertainty.
