I've done a bunch of user interviews with ChatGPT, Pi, Gemini, etc. users and found there are two common usage patterns:
1. "Transactional" where every chat is a separate question, sort of like a Google search... People don't expect memory or any continuity between chats.
2. "Relationship-driven" where people chat with the LLM as if it's a friend or colleague. In this case, memory is critical.
I'm quite excited to see how OpenAI (and others) blend features across #1 and #2, since in many ways these can require different user flows.
So HN -- how do you use these bots? And given that, how does memory resonate with you?
Some memory might actually be helpful. For example, having it know that I have a Mac means it will give me Mac-specific answers to command line questions without me having to add "for the Mac" to my prompt. Or having it know that I prefer Python means it will give coding answers in Python.
But in all those cases it takes me just a few characters to express that context with each request, and to be honest, I'll probably keep doing it even with memory, because it's a habit at this point.