zlacker

[parent] [thread] 2 comments
1. jerpin+(OP)[view] [source] 2024-02-13 21:30:41
Agreed, I do this all the time, especially when the model hits a dead end.
replies(1): >>hacker+461
2. hacker+461[view] [source] 2024-02-14 07:13:39
>>jerpin+(OP)
I often run multiple parallel chats and expose each one to slightly different amounts of information, then average the answers in my head to come up with something more reliable.
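
The "parallel chats, then average" idea is roughly a majority vote over prompt variants. Here's a minimal sketch; `ask` is a placeholder for whatever chat-completion call you actually use, and the stub model is purely illustrative:

```python
from collections import Counter

def majority_answer(ask, prompts):
    """Query the model once per prompt variant, return the most common answer.

    `ask` is a stand-in for a real chat API call (an assumption, not a
    specific library), so this runs with any callable prompt -> answer.
    """
    answers = [ask(p) for p in prompts]
    return Counter(answers).most_common(1)[0][0]

# Stub model for illustration: its answer shifts with the prompt's phrasing.
def fake_ask(prompt):
    return "42" if "context" in prompt else "41"

prompts = [
    "Question, plus context A",
    "Question, plus context B",
    "Question, no extras",
]
print(majority_answer(fake_ask, prompts))  # "42" wins the vote 2-1
```

Voting only works cleanly for short, discrete answers; for free-form text you'd compare answers more loosely, the way the commenter does "in my head".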

For coding tasks, I've found it helps to feed the GPT-4 answer into another GPT-4 instance and say "review this code step by step, identify any bugs" etc. It can sometimes find its own errors.

replies(1): >>jerpin+yK1
3. jerpin+yK1[view] [source] [discussion] 2024-02-14 14:02:10
>>hacker+461
I feel like you could probably generalize this method and use it to get better performance out of LLMs more broadly.