zlacker

1. cr4zy (OP) | 2023-07-05 18:44:07
Allocating 20% of compute to safety would not be enough if safety and capability aren't aligned, i.e., unless Bostrom's orthogonality thesis is mostly wrong. However, I believe they may be sufficiently aligned in the long term for 20% to work [1]. The biggest threat, imo, is that more resources get devoted to AIs with military or profit-driven objectives focused on shorter-term capability and power. In that case, capability and safety are not aligned and we race to the bottom. Hopefully global coordination, plus this effort to achieve superalignment in four years, will avoid that.

[1] https://drive.google.com/file/d/1rdG5QCTqSXNaJZrYMxO9x2ChsPB...
