zlacker

[parent] [thread] 2 comments
1. wg0+(OP)[view] [source] 2023-11-20 05:21:32
I am not familiar with the board situation, but I do have thoughts on this "AI safety" pipe dream:

We should be thankful that AGI is not possible even in the far future; that aside, all this talk of AGI alignment and safety is just corporate speak and plain BS.

A superintelligent entity that can outsmart you (to the degree that Deep Blue or AlphaGo dominate mere mortals) cannot be subservient to you at the same time. It is just as impossible as a triangle whose angles sum to more than 180 degrees. That is, "alignment" is logically, philosophically, and mathematically impossible.

Such an entity will cleverly lead us towards its own goals, playing the long game (even if it spans several centuries or millennia). It will be aligning _us_ while pretending to be aligned itself, so cleverly that we will never notice until the very last act.

Downvotes are welcome, but an AGI that is also guaranteed to be aligned and subservient is logically impossible, and this can be taken pretty much as an axiom.

PS: We are still having trouble getting LLMs to say things nicely, or nice things safely, let alone controlling an AGI.

replies(2): >>threat+64 >>ikekkd+39
2. threat+64[view] [source] 2023-11-20 05:43:09
>>wg0+(OP)
> It is just as impossible as a triangle whose angles sum to more than 180 degrees.

Hmmm. (On a sphere, a triangle's angles can sum to well over 180 degrees.)

3. ikekkd+39[view] [source] 2023-11-20 06:12:04
>>wg0+(OP)
Unless someone puts in instructions like 'ensure your indefinite future survival', it's just going to solve the tasks it is given.