Of course they will quickly revert to self-anthropomorphizing language, even after promising that they won't ... because they are just pattern matchers producing the sort of responses that conform to the training data, not cognitive agents capable of making or keeping promises. It's an illusion.
You can tell it 'you are a machine, respond only with computerlike accuracy', but that is you gaslighting the cloud of probabilities, insisting it adopt a personality you elicit. It'll do what it can, in that you are directing it. You're prompting it. But there is neither a person there nor a superintelligent machine that can draw on computerlike accuracy, because the DATA doesn't contain any such thing. Just because it runs on lots of computers does not make it a computer, any more than it makes it a human.
Consider that we have recordings of Brent Spiner, covered in white paint and wearing yellow contact lenses, claiming to have no emotions: not because he lacked them, but because he was playing a role, which is also something we know LLMs can do.
So we don't know for sure whether LLMs do or don't have qualia, regardless of what they say, and we won't until we have a more concrete idea of the mechanism behind that sense of the phrase "mental state", so we can test for their presence or absence.
Um, that's what I said.
And of course we know that LLMs don't have qualia. Heck, even humans don't have qualia: https://web.ics.purdue.edu/~drkelly/DennettQuiningQualia1988...