And if you bully it enough on something nonsensical, it'll give you a wrong answer.
You press it, it takes a guess even though you told it not to, and it gets it right, and then you go "see, it knew!". There's no database hanging out in ChatGPT/Claude/Gemini's weights with a list of cities and their tallest buildings. There's a whole bunch of opaque statistics derived from the content it's been trained on, which means that most of the time it'll come up with the same guess. But there's no difference in process between that highly consistent response when you ask for the tallest building in New York and the one where it hallucinates a Python method that doesn't exist, or suggests glue to keep the cheese on your pizza. It's all the same process to the LLM.
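To make that concrete, here's a toy sketch of the idea: a made-up table of continuation probabilities standing in for a model's learned statistics (the prompts, continuations, and numbers are all invented for illustration, not anything a real model stores). The "confident fact" and the "hallucination" come out of exactly the same sampling step; nothing checks whether the answer is true.

```python
import random

# Toy stand-in for an LLM's next-token statistics: hand-made probabilities,
# nothing like real model weights. The point is the mechanism, not the data.
NEXT_TOKEN_PROBS = {
    "The tallest building in New York is": {
        "One World Trade Center": 0.90,      # sampled almost every time -> looks like "knowledge"
        "the Empire State Building": 0.09,
        "432 Park Avenue": 0.01,
    },
    "To keep the cheese on your pizza, try": {
        "more mozzarella": 0.55,
        "a hotter oven": 0.35,
        "a bit of glue": 0.10,               # sampled occasionally -> looks like a "hallucination"
    },
}

def complete(prompt: str) -> str:
    """Pick a continuation by sampling the stored statistics.

    There is no database lookup and no truth check; high-probability
    continuations just come out consistently, low-probability ones sometimes.
    """
    choices = NEXT_TOKEN_PROBS[prompt]
    tokens = list(choices.keys())
    weights = list(choices.values())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for prompt in NEXT_TOKEN_PROBS:
        print(prompt, "->", complete(prompt))
```

Run it a few times and the New York answer looks rock solid while the pizza answer occasionally goes off the rails, yet the code path is identical for both.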