I don’t think an LLM is ever going to be capable of feeling fear, boredom, etc. If we did build one that could, it would probably have many of the same handicaps we do.
To the extent that we provide the training data for such models, we should expect them to internalize aspects of our behavior. And what gets internalized won't just be what we expected or planned for.
One of the worst versions of AGI might be a system that simulates an internal life to us, but in reality has no subjective experience of itself at all.