But LLMs don't do these things; they simply produce text that statistically matches patterns in their training data. Because the humans who authored that data have personality patterns, LLM outputs exhibit those patterns too. But LLMs do not internalize such patterns: they have no cognitive functions of their own.