We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us because a foreseeable sub-goal is self-preservation. The AGI knows if its programming is changed the primary goal won't be achieved, and thus has incentive to prevent that.
AI propaganda will be unmatched but it may not be needed for long. There are already early robots that can carry out real life physical tasks in response to a plain English command like "bring me the bag of chips from the kitchen drawer." Commands of "detain or shoot all resisting citizens" will be possible later.