That was a common hack for the LLM context-length problem, but now that context length is more or less "solved", the technique is probably more useful for aligning output than for fitting it in.
Adding the entire file (or memory in this case) would take up too much of the context. So just query the DB, and if there's a match, add it to the prompt after the conversation has started.
I could imagine that once there are too many memories, it would indeed make sense to classify them in a database, though: "prefers cats over dogs" is probably not salient information for most queries.
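A minimal sketch of that query-then-inject flow. Everything here is illustrative: the in-memory store, the word-overlap scoring, and the overlap threshold are stand-ins for a real database and embedding search, not any product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    memories: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.memories.append(fact)

    def relevant(self, query: str, min_overlap: int = 2) -> list[str]:
        # Naive salience check: keep a memory only if it shares enough
        # words with the current query, so "prefers cats over dogs"
        # stays out of unrelated conversations.
        q = set(query.lower().split())
        return [m for m in self.memories
                if len(q & set(m.lower().split())) >= min_overlap]

def build_prompt(store: MemoryStore, user_msg: str) -> str:
    # Only matched memories are injected, so the context stays small.
    hits = store.relevant(user_msg)
    context = "\n".join(f"- {h}" for h in hits)
    prefix = f"Known about the user:\n{context}\n\n" if hits else ""
    return prefix + f"User: {user_msg}"
```

In practice you'd swap the word-overlap scoring for vector similarity, but the shape is the same: retrieve few, inject few.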
Periodically "compress" chat history into relevant context and keep that slice of history as part of the memory.
A 15-day message history could be condensed greatly and still produce great results.
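The compression step might look like this rolling scheme: once the log grows past a budget, older turns get folded into a running summary and only the recent slice is kept verbatim. `summarize()` is a stub standing in for an LLM summarization call; the function names and the `keep_recent` cutoff are assumptions for illustration.

```python
def summarize(turns: list[str]) -> str:
    # Stub: a real system would call a model to condense these turns.
    return f"[summary of {len(turns)} earlier turns]"

def compress_history(history: list[str], keep_recent: int = 4) -> list[str]:
    # Short histories fit as-is; longer ones get their older portion
    # replaced by a single summary entry.
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(older)] + recent
```

Run periodically, this keeps the prompt bounded no matter how long the conversation runs, at the cost of losing detail from the summarized turns.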