>>Josely (OP)
This seems like a really useful (and obvious) feature, but I wonder whether it could lead to a kind of "AI filter bubble": what if one of its memories is "this user doesn't like to be argued with; just confirm whatever they suggest"?
>>lxgr
Memories are stored as distinct blobs of text. You could probably have an offline LLM scan each of these memories one by one (or in chunks), determine whether it could create such issues, and then delete the flagged ones in a targeted way.
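A minimal sketch of what that scan loop might look like, assuming a locally running model behind an Ollama-style /api/generate endpoint and the memories already exported as a plain list of strings (both of those are my assumptions, not anything the actual memory feature exposes):

```python
import json
import requests

# Assumed local endpoint (Ollama-style); swap in whatever offline runner you use.
LLM_URL = "http://localhost:11434/api/generate"

PROMPT = (
    "You are auditing an AI assistant's stored memories. "
    "Answer only YES or NO: could the following memory push the assistant "
    "toward sycophancy or a filter bubble (e.g. 'never argue with the user')?\n\n"
    "Memory: {memory}"
)

def is_problematic(memory: str, model: str = "llama3") -> bool:
    """Ask the offline model whether a single memory blob looks risky."""
    resp = requests.post(
        LLM_URL,
        json={"model": model, "prompt": PROMPT.format(memory=memory), "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json().get("response", "").strip().upper()
    return answer.startswith("YES")

def audit_memories(memories: list[str]) -> list[str]:
    """Return the subset of memories flagged for targeted deletion."""
    return [m for m in memories if is_problematic(m)]

if __name__ == "__main__":
    # Hypothetical export file; the real feature would need its own export step.
    with open("memories.json") as f:
        memories = json.load(f)
    for m in audit_memories(memories):
        print("FLAGGED:", m)
```

The one-memory-at-a-time prompt keeps each judgment independent, so one bad memory can't bias the classification of the others; batching into chunks would be cheaper but noisier.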