P.S. What I said is not "paradoxical". An LLM does not take on the attributes of its training data, any more than a computer screen displaying the pages of books becomes an author. Regardless of what is in the training data, the LLM remains the same statistical engine. The notion that an LLM can take on human characteristics is a category mistake, like thinking that there are people inside your TV set. The TV set is not, for instance, a criminal, even if it is tuned to crime shows 24/7. And an LLM does not have a tendency to protect its ego, even if everyone who contributed to the training data does ... the LLM doesn't have an ego. Those are characteristics of its output, not of the LLM itself, and there's a huge difference between the two. Too many people seem to think that if, for instance, they insult the LLM, it feels offended, just because it says it does. But that's entirely an illusion.