We are actually the ones training the AI. It won't "derive" its terminal values by itself (instrumental values are just subgoals, and some of them are convergent, like seeking power and not wanting to be turned off). Just as we didn't derive our own terminal values: those came from evolution, a mindless process. The difficulty is how to give the AI the right values.
>>cubefo+(OP)
What makes you so sure that evolution is a mindless process? No doubt you were told that in high school, but examine your priors. How do minds arise from mindlessness?