When we talk about developing systems to align or control a superintelligence, on the premise that humans will not be able to do it themselves, we are necessarily talking about designing something capable of governing very complex systems. What happens when the "misaligned system" turns out to be a rebellious human being? In trying to keep a potential superintelligence on a positive path, is it possible we are building the very system that enslaves us?
Maybe this is one of those problems where the attempted solution creates the very danger it was meant to prevent.