I've done a bunch of user interviews with people who use ChatGPT, Pi, Gemini, etc., and found two common usage patterns:
1. "Transactional" where every chat is a separate question, sort of like a Google search... People don't expect memory or any continuity between chats.
2. "Relationship-driven" where people chat with the LLM as if it's a friend or colleague. In this case, memory is critical.
I'm quite excited to see how OpenAI (and others) blend usage features between #1 and #2, since in many ways the two can require different user flows.
So HN -- how do you use these bots? And, as a result, how does memory resonate with you?
Starting clean also has the benefit of knowing the prompt/history is in a clean/"known-good" state, and that there's nothing in the memory that's going to cause the LLM to get weird on me.
This matters a lot for prompt injection/hijacking. Not that I'm clamoring to give OpenAI access to my personal files or APIs in the first place, but I'm definitely not interested in giving a version of GPT with more persistent memory access to those files or APIs. A clean slate is a mitigation that helps with a real security risk. It's not enough on its own, but it helps a bit.
Oddly, the spoken version of ChatGPT-4 does implore, listen and respond to tone, give the same energy back, and ask questions. Sometimes it accidentally sounds sarcastic: “is this one of your interests?”
For those cases there are quite a few things that I'd like it to memorize, like programming library preferences ("When working with dates, prefer `date-fns` over `moment.js`") or code style preferences ("When writing a React component, prefer function components over class components"). Currently I feed in those preferences via the custom instructions feature, but I rarely take the time to update them, so the memory feature is a welcome addition here.
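To make that concrete, here's the kind of code those two preferences would nudge the model toward (my own illustrative snippet with made-up names like `DueDate`, not something ChatGPT generated):

```tsx
import * as React from "react";
import { addDays, format } from "date-fns";

// A function component (not a class component) that uses date-fns
// instead of moment.js -- exactly the two preferences I'd want memorized.
type DueDateProps = { start: Date; daysUntilDue: number };

export function DueDate({ start, daysUntilDue }: DueDateProps) {
  const due = addDays(start, daysUntilDue);
  return <span>Due on {format(due, "yyyy-MM-dd")}</span>;
}
```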
I find that I ask a mix of one-off questions and questions that require a lot of refinement, and the latter get buried among the former when I try to find them again, so I end up re-explaining myself in new chats.
Some memory might actually be helpful. For example, having it know that I have a Mac will give me Mac-specific answers to command line questions without me having to add "for the Mac" to my prompt. Or having it know that I prefer Python will give me coding answers in Python.
But in all those cases it takes me just a few characters to express that context with each request, and to be honest, I'll probably do it anyway even with memory, because it's habit at this point.
I think I am, and perhaps most people are, firmly transactional. And I think, in the interests of pursuing "stickiness" unique to OpenAI, they are attempting to add relationship-driven/sticky bells and whistles, even though those pull the user interface as a whole toward a set of assumptions about usage that don't apply to me.
> You can turn off memory at any time (Settings > Personalization > Memory). While memory is off, you won't create or use memories.
I would love it if I could have isolated memory windows where it would remember what I am working on, but only if the chat was in a 'folder' with the other chats.
I don't want it to blend ideas across my entire account but just a select few.
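To be clear about what I'm imagining (purely my own sketch, not anything OpenAI has described), the isolation could be as simple as keying memories by folder rather than by account, so recall never reaches outside the folder a chat lives in:

```ts
// Hypothetical sketch of folder-scoped memory: every memory is keyed to a
// folder, and recall only ever looks inside the chat's own folder.
type Memory = { text: string; createdAt: Date };

class FolderScopedMemory {
  private store = new Map<string, Memory[]>(); // folderId -> memories

  remember(folderId: string, text: string): void {
    const list = this.store.get(folderId) ?? [];
    list.push({ text, createdAt: new Date() });
    this.store.set(folderId, list);
  }

  // Nothing from other folders (or the account at large) is eligible here.
  recall(folderId: string): Memory[] {
    return this.store.get(folderId) ?? [];
  }
}
```

A chat living in a "project-x" folder would only ever see the results of `recall("project-x")`, and chats outside any folder would get no memory at all.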