Sure, it suffers from amnesia and sometimes cannot get basic things right, but the first is a design limitation that can be overcome, and the second is possibly a problem caused by training (we're discovering that focusing too heavily on prompt adherence during training somewhat lobotomizes a model's general competence).
As for the second point: we can be 100% sure this cannot be solved. It is not a training issue; it is the case for any statistical machine, and no amount of training can ever fix it.
If you had enough paper and ink, and the patience to go through it all, you could take all the training data and step through the training process by hand, producing the same model. Then, once the model was trained, you could use more pen and paper to step through the prompts and arrive at the same answer. All of this would be a completely mechanical process.
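To make that concrete, here is a minimal sketch in Python of what "stepping through with pen and paper" means. The weights are made up for illustration; a real LLM has billions of them, but every step of inference is the same kind of multiply-add and squashing arithmetic you could carry out by hand:

```python
import math

# Hypothetical weights "learned" during training (a real model has billions).
weights = [0.21, -0.47, 0.88]
bias = 0.05

def forward(inputs):
    # Weighted sum: nothing but multiplication and addition.
    total = bias
    for w, x in zip(weights, inputs):
        total += w * x
    # Squash through a sigmoid: more plain arithmetic, nothing else.
    return 1.0 / (1.0 + math.exp(-total))

# "Prompting" the model is just feeding in numbers and doing the sums.
print(forward([1.0, 0.5, -0.3]))  # same inputs, same output, every time
```

Nothing in that loop requires a computer; the machine only makes the bookkeeping fast.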