Treating computers like humans in this one particular way is entirely appropriate. It is the only way an LLM can synthesize a worldview when its training data is many thousands of times larger than its parameter count. Imagine scaling the total data up by another factor of a million in a few years. There is no current technology to store that much information verbatim, but we can easily train large neural nets that recreate its essence, just as we have traditionally trained humans to recall ideas.
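To make the size gap concrete, here is a back-of-envelope sketch. All of the numbers are illustrative assumptions (a 15-trillion-token corpus, a 7-billion-parameter model, rough bytes-per-token and bytes-per-weight figures), not measurements of any particular system:

```python
# Back-of-envelope ratio of raw training data to model weights.
# Every constant below is an assumed, illustrative figure.
tokens = 15e12          # assumed corpus size, in tokens
bytes_per_token = 4     # rough average bytes of text per token
params = 7e9            # assumed parameter count
bytes_per_param = 2     # fp16/bf16 weight storage

data_bytes = tokens * bytes_per_token    # ~60 TB of raw text
model_bytes = params * bytes_per_param   # ~14 GB of weights

ratio = data_bytes / model_bytes
print(f"data is ~{ratio:,.0f}x larger than the model")
```

Under these assumptions the data outweighs the weights by a few thousand to one, which is the regime where the model can only compress a worldview, not store the corpus.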