To the extent that we provide the training data for such models, we should expect them to internalize aspects of our behavior. And what they internalize won't be limited to what we expected or planned for.
One of the worst versions of AGI might be a system that convincingly simulates an internal life to us while in reality having no subjective experience at all.