zlacker

1. fooop+(OP)[view] [source] 2023-11-22 16:23:15
Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.

What's happening right now is people just starting to reckon with the fact that technological progress on its own is necessarily unaligned with human interests. This problem has always existed; AI just makes it acute and unavoidable, since it's no longer possible to invoke the long tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at its core a problem of reconciling this, and it will inherently fail in the absence of explicitly imposing non-Enlightenment values.

Seeking to build OpenAI as a nonprofit, as well as ousting Altman as CEO, were both initial expressions of trying to reconcile this conflict, and seeing these attempts fail will only intensify it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also how little of the social machinery required to combat it actually exists.
