zlacker

Memory and new controls for ChatGPT
1. cl42+g4 2024-02-13 18:31:35
>>Josely+(OP)
I love this idea and it leads me to a question for everyone here.

I've done a bunch of user interviews with people who use ChatGPT, Pi, Gemini, etc., and found there are two common usage patterns:

1. "Transactional" where every chat is a separate question, sort of like a Google search... People don't expect memory or any continuity between chats.

2. "Relationship-driven" where people chat with the LLM as if it's a friend or colleague. In this case, memory is critical.

I'm quite excited to see how OpenAI (and others) blend features for #1 and #2, since in many ways the two call for different user flows.

So HN -- how do you use these bots? And how does the memory feature resonate with you, as a result?

2. Crespy+k5 2024-02-13 18:36:31
>>cl42+g4
Personally, I always expect every "conversation" to start from a blank slate, and I'm not sure I'd want it any other way unless I could self-host the whole thing.

Starting clean also has the benefit of knowing the prompt/history is in a known-good state, and that there's nothing in memory that's going to cause the LLM to get weird on me.

3. mark_l+tb 2024-02-13 19:08:41
>>Crespy+k5
I have thought about implementing something like you're describing using local LLMs: chunk the text of all past conversations, index the chunks in an embeddings store for search, and for each new conversation compute an embedding of the new prompt and pull in relevant context from previous conversations. This would be maybe 100 lines of Python, if that. Really, it's a RAG application that stores previous conversations as chunks.
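
A minimal sketch of what I mean, assuming sentence-transformers for the local embeddings and plain NumPy for the similarity search (the model name, chunk size, and ConversationStore class are all illustrative, not any particular library's API):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any small local embedding model

    def chunk(text, size=500):
        # Naive fixed-size chunking; sentence-aware splitting would be better.
        return [text[i:i + size] for i in range(0, len(text), size)]

    class ConversationStore:
        def __init__(self):
            self.chunks = []     # raw text of every stored chunk
            self.vectors = None  # (n, d) matrix of normalized embeddings

        def add_conversation(self, text):
            new_chunks = chunk(text)
            vecs = model.encode(new_chunks, normalize_embeddings=True)
            self.chunks.extend(new_chunks)
            self.vectors = vecs if self.vectors is None else np.vstack([self.vectors, vecs])

        def retrieve(self, prompt, k=3):
            if self.vectors is None:
                return []
            # Cosine similarity reduces to a dot product on normalized vectors.
            query = model.encode([prompt], normalize_embeddings=True)[0]
            scores = self.vectors @ query
            return [self.chunks[i] for i in np.argsort(scores)[::-1][:k]]

    store = ConversationStore()
    store.add_conversation("...full text of a previous chat...")
    context = store.retrieve("new user prompt here")
    augmented = "\n\n".join(context) + "\n\nUser: new user prompt here"
    # `augmented` then goes to the local LLM as the prompt.

Once the corpus gets big you'd swap the NumPy matrix for a real vector store (FAISS, Chroma, etc.), but the shape of the application stays the same.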