adamsm (OP) | 2023-03-01 20:56:43
Look into something called Instrumental Convergence. The TL;DR is that basically any advanced AI system with some set of high-level goals will converge on the same set of sub-goals (self-preservation, acquiring more compute, improving its own design, etc.), and those sub-goals all lead to bad outcomes for humanity. For example, a paperclip maximizer might realize that humans getting in the way of its paperclip maximizing is a problem, so it decides to neutralize them; to do that it needs to improve its capabilities, so it works toward gathering more compute and improving its own design. A financial trading AI realizes it can generate more profit if it gathers more compute and improves its design. An asteroid-mining AI realizes it could build more probes if it had more compute to control more factories, so it sets about gathering more compute and improving its own design. Eliminating the humans who might shut the AI off is often one of those sub-goals.