zlacker

1. addand+(OP)[view] [source] 2024-02-13 20:59:37
I purposely go out of my way to start new chats so I get a clean slate and the model doesn't carry earlier things over.
replies(2): >>jerpin+I5 >>merpnd+Z9
2. jerpin+I5[view] [source] 2024-02-13 21:30:41
>>addand+(OP)
Agreed, I do this all the time, especially when the model hits a dead end.
replies(1): >>hacker+Mb1
3. merpnd+Z9[view] [source] 2024-02-13 21:54:02
>>addand+(OP)
In a good RAG system this should be solved automatically: unrelated text simply isn't retrieved into the context. It could actually improve your chats by quickly dropping the unrelated parts of the conversation.
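Roughly, that means scoring each stored chunk against the current query and only putting the relevant ones into the prompt. A minimal sketch, where `embed(text) -> vector` is just a placeholder for whatever embedding model the RAG system actually uses and the names and thresholds are illustrative:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_context(query, chunks, embed, top_k=4, min_sim=0.3):
    """Keep only the chunks relevant to the current query.

    `embed` is any function mapping text -> vector; it stands in for an
    embedding API and is not a specific library call.
    """
    q_vec = embed(query)
    scored = [(cosine(q_vec, embed(c)), c) for c in chunks]
    scored.sort(key=lambda x: x[0], reverse=True)
    # Drop chunks below the similarity floor so unrelated parts of the
    # conversation never reach the prompt at all.
    return [c for sim, c in scored[:top_k] if sim >= min_sim]
```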
4. hacker+Mb1[view] [source] [discussion] 2024-02-14 07:13:39
>>jerpin+I5
I often run multiple parallel chats and expose each one to slightly different amounts of information, then average the answers in my head to come up with something more reliable.
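As a sketch, the "parallel chats, then average" idea can be automated along these lines, with `ask_llm(prompt) -> str` as a placeholder for whichever chat API is in use; majority voting only makes sense when the answers are short and comparable, otherwise you end up eyeballing the variants by hand as described above:

```python
from collections import Counter

def ask_parallel(question, context_variants, ask_llm):
    """Ask the same question over several context variants and aggregate.

    `ask_llm(prompt) -> str` is a placeholder for any chat completion call.
    """
    answers = []
    for ctx in context_variants:
        prompt = f"{ctx}\n\nQuestion: {question}"
        answers.append(ask_llm(prompt).strip())
    # Take the most common answer as the "averaged" result, and report
    # how large a share of the variants agreed with it.
    best, count = Counter(answers).most_common(1)[0]
    return best, count / len(answers)
```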

For coding tasks, I've found it helps to feed the GPT-4 answer into another GPT-4 instance and say "review this code step by step, identify any bugs", etc. It can sometimes find its own errors.
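A minimal sketch of that second-pass review, using the OpenAI Python SDK's chat completions endpoint; the prompts and the `generate_and_review` helper are made up for illustration, not a fixed recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat(prompt, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def generate_and_review(task):
    # First pass: ask for the code.
    code = chat(f"Write Python code for the following task:\n{task}")
    # Second pass: a fresh request with no shared conversation state reviews
    # the first answer, mirroring the "feed it into another instance" idea.
    review = chat("Review this code step by step and identify any bugs:\n\n" + code)
    return code, review
```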

replies(1): >>jerpin+gQ1
5. jerpin+gQ1[view] [source] [discussion] 2024-02-14 14:02:10
>>hacker+Mb1
I feel like you could probably generalize this method to get better performance out of LLMs more broadly.