zlacker

[parent] [thread] 18 comments
1. cl42+(OP)[view] [source] 2024-02-13 18:31:35
I love this idea and it leads me to a question for everyone here.

I've done a bunch of user interviews with ChatGPT, Pi, Gemini, etc. users and find there are two common usage patterns:

1. "Transactional" where every chat is a separate question, sort of like a Google search... People don't expect memory or any continuity between chats.

2. "Relationship-driven" where people chat with the LLM as if it's a friend or colleague. In this case, memory is critical.

I'm quite excited to see how OpenAI (and others) blend features for usage patterns #1 and #2, as in many ways these can require different user flows.

So HN -- how do you use these bots? And how does memory resonate, as a result?

replies(9): >>Crespy+41 >>yieldc+74 >>kiney+c5 >>hobofa+s5 >>kraftm+x6 >>jedber+T6 >>glenst+af >>snoman+bf >>Jpgrew+zIh
2. Crespy+41[view] [source] 2024-02-13 18:36:31
>>cl42+(OP)
Personally, I always expect every "conversation" to be starting from a blank slate, and I'm not sure I'd want it any other way unless I can self-host the whole thing.

Starting clean also has the benefit of knowing the prompt/history is in a clean/"known-good" state, and that there's nothing in the memory that's going to cause the LLM to get weird on me.

replies(4): >>danShu+B3 >>mark_l+d7 >>mhink+Wh >>madame+qk
3. danShu+B3[view] [source] [discussion] 2024-02-13 18:48:59
>>Crespy+41
> Starting clean also has the benefit of knowing the prompt/history is in a clean/"known-good" state, and that there's nothing in the memory that's going to cause the LLM to get weird on me.

This matters a lot for prompt injection/hijacking. Not that I'm clamoring to give OpenAI access to my personal files or APIs in the first place, but I'm definitely not interested in giving a version of GPT with more persistent memory access to those files or APIs. A clean slate is a mitigating feature that helps with a real security risk. It's not enough of a mitigating feature, but it helps a bit.

4. yieldc+74[view] [source] 2024-02-13 18:51:09
>>cl42+(OP)
Speaking of transactional, the textual version of ChatGPT4 never asks questions or holds a conversation; it's predicting what it thinks you need to know. One response, nothing unprompted.

Oddly, the spoken version of ChatGPT4 does ask questions, listens and responds to tone, and gives the same energy back. Sometimes it accidentally sounds sarcastic: “is this one of your interests?”

5. kiney+c5[view] [source] 2024-02-13 18:57:29
>>cl42+(OP)
I use it exclusively in the "transactional" style, often even opening a new chat for the same topic when ChatGPT is going down the wrong road.

6. hobofa+s5[view] [source] 2024-02-13 18:58:52
>>cl42+(OP)
My main usage of ChatGPT/Phind is for work-transactional things.

For those cases there are quite a few things that I'd like it to remember, like programming library preferences ("When working with dates, prefer `date-fns` over `moment.js`") or code style preferences ("When writing a React component, prefer function components over class components"). Currently I feed those preferences in via the custom instructions feature, but I rarely take the time to update them, so the memory feature is a welcome addition here.

7. kraftm+x6[view] [source] 2024-02-13 19:04:50
>>cl42+(OP)
Personally I would like a kind of 2D map of 'contexts' in which I can choose in space where to ask new questions. Each context would contain sub-contexts. For example, maybe I'm looking for career advice and I start out a chat with details of my job history; then I'm looking for a job and I paste in my CV; then I'm applying for a specific job and I paste in the job description. It would be nice to easily navigate to career + CV + specific job description and start a new chat with 'what's missing from my CV that I should highlight for this job?'

I find that I ask a mix of one-off questions and questions that require a lot of refinement, and the latter get buried among the former when I try to find them again, so I end up re-explaining myself in new chats.
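A rough sketch of what that nested-context idea could look like as a data structure (all names and notes here are made up for illustration):

```python
class ContextNode:
    """One node in a tree of chat contexts, e.g. career -> CV -> a specific job."""

    def __init__(self, name, notes="", parent=None):
        self.name = name
        self.notes = notes      # the text you pasted into this context
        self.parent = parent
        self.children = {}

    def add(self, name, notes=""):
        """Create and return a sub-context under this one."""
        child = ContextNode(name, notes, parent=self)
        self.children[name] = child
        return child

    def prompt_prefix(self):
        """Walk up to the root, collecting every ancestor's notes, so a new
        chat started at this node begins with the full accumulated context."""
        parts = []
        node = self
        while node is not None:
            if node.notes:
                parts.append(f"[{node.name}] {node.notes}")
            node = node.parent
        return "\n".join(reversed(parts))
```

Starting a new chat at the "specific job" node would then just mean prepending `job.prompt_prefix()` to the new question.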

replies(2): >>polyga+37 >>singul+pt
8. jedber+T6[view] [source] 2024-02-13 19:06:59
>>cl42+(OP)
I use it for transactional tasks, mostly of the "I need a program/script/command line that does X" variety.

Some memory might actually be helpful. For example, having it know that I have a Mac would give me Mac-specific answers to command line questions without me having to add "for the Mac" to my prompt. Or having it know that I prefer Python would make it give coding answers in Python.

But in all those cases it takes me just a few characters to express that context with each request, and to be honest, I'll probably do it anyway even with memory, because it's habit at this point.

replies(1): >>c2lsZW+ct
9. polyga+37[view] [source] [discussion] 2024-02-13 19:07:46
>>kraftm+x6
I think it’s less of a 2D structure and more of a tree structure that you are describing. I’ve also felt the need for “threads” with ChatGPT that I wish I could follow.
replies(1): >>kraftm+C8
10. mark_l+d7[view] [source] [discussion] 2024-02-13 19:08:41
>>Crespy+41
I have thought of implementing something like you are describing using local LLMs: chunk the text of all conversations, store the chunks in an embeddings data store for search, and for each new conversation calculate an embedding for the new prompt and add context text from previous conversations. This would be maybe 100 lines of Python, if that. Really, it's a RAG application that stores previous conversations as chunks.
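A minimal sketch of that RAG-over-past-conversations idea. The `embed` function here is a toy hashed bag-of-words stand-in so the snippet is self-contained; a real version would call an actual embedding model (local or hosted):

```python
import math
import re


def embed(text, dim=256):
    # Toy stand-in for a real embedding model: a normalized hashed
    # bag-of-words vector. Swap in real embeddings for actual use.
    vec = [0.0] * dim
    for tok in re.findall(r"[a-z']+", text.lower()):
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def chunk(text, size=200):
    # Split a conversation transcript into overlapping word chunks.
    words = text.split()
    step = size // 2
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)] or [text]


class ConversationStore:
    def __init__(self):
        self.chunks = []  # list of (chunk_text, embedding) pairs

    def add_conversation(self, transcript):
        for c in chunk(transcript):
            self.chunks.append((c, embed(c)))

    def context_for(self, prompt, k=3):
        # Retrieve the k past chunks most similar (by cosine) to the new
        # prompt, to prepend as context for the new conversation.
        q = embed(prompt)
        scored = sorted(
            self.chunks,
            key=lambda ce: sum(a * b for a, b in zip(q, ce[1])),
            reverse=True,
        )
        return [c for c, _ in scored[:k]]
```

The store is in-memory here; a dedicated embeddings/vector database would replace the sorted scan for anything beyond a handful of conversations.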
11. kraftm+C8[view] [source] [discussion] 2024-02-13 19:15:19
>>polyga+37
Yeah, that's probably a better way of putting it. A lot of times I find myself wanting to branch off of the same answer with different questions, and I worry that if I ask them all sequentially ChatGPT will lose 'focus'.
replies(1): >>airstr+8q
12. glenst+af[view] [source] 2024-02-13 19:45:01
>>cl42+(OP)
I think this is an extremely helpful distinction, because it disentangles a couple of things I could not clearly disentangle on my own.

I think I am, and perhaps most people are, firmly transactional. And I think, in the interest of pursuing "stickiness" unique to OpenAI, they are attempting to add relationship-driven/sticky bells and whistles, even though those pull the user interface as a whole toward a set of assumptions about usage that don't apply to me.

13. snoman+bf[view] [source] 2024-02-13 19:45:02
>>cl42+(OP)
For me it’s a combination of transactional and topical. By topical, I mean that I have a couple of persistent topics that I think on and work on (like writing an article on a topic), and I like to return to those conversations so that the context is there.
14. mhink+Wh[view] [source] [discussion] 2024-02-13 19:59:46
>>Crespy+41
Looks like you'll be able to turn the feature off:

> You can turn off memory at any time (Settings > Personalization > Memory). While memory is off, you won't create or use memories.

15. madame+qk[view] [source] [discussion] 2024-02-13 20:12:24
>>Crespy+41
Memory would be much more useful on a project or topic basis.

I would love if I could have isolated memory windows where it would remember what I am working on but only if the chat was in a 'folder' with the other chats.

I don't want it to blend ideas across my entire account but just a select few.

16. airstr+8q[view] [source] [discussion] 2024-02-13 20:46:44
>>kraftm+C8
You can go back and edit an answer, which then creates a separate "thread". Clicking left/right on that edited answer will reload the subsequent replies that came from that specific version of the answer.
17. c2lsZW+ct[view] [source] [discussion] 2024-02-13 21:04:15
>>jedber+T6
For what you described the
18. singul+pt[view] [source] [discussion] 2024-02-13 21:05:32
>>kraftm+x6
You can create your own custom GPTs for different scenarios in no time.
19. Jpgrew+zIh[view] [source] 2024-02-19 12:38:16
>>cl42+(OP)
Sometimes GPT-4 and I will arrive at a useful frame that I wish I could use as a starting point for other topics or tangents. I wish I could refer to a link to an earlier conversation as a starting point for a new conversation.