zlacker

[return to "Introducing Superalignment"]
1. spacem+D22 2023-07-06 04:20:52
>>tim_sw+(OP)
I know everyone keeps mocking the idea of “AGI”, but if the company at the forefront of the field is actually spending money on managing potential AGI, and publicly declaring that it could “happen within this decade”, surely that must wake us out of our complacency?

It’s one thing to philosophize and ruminate, but it’s another when the soldiers in the trenches start spending valuable resources on it - surely that must mean the threat is more real than we imagine?

If there is even a 1% chance that OpenAI is right, that has enormous ramifications for us as a species. Can we really afford to ignore their claims?

2. reduce+ob2 2023-07-06 05:44:44
>>spacem+D22
It's not easy for people to grapple with exponential takeoffs and the possibility of truly world-ending outcomes - after all, they've managed to ignore the hundreds of nukes pointed at them for most of their lives.