zlacker

1. msp26+(OP) 2023-09-12 17:49:20
Do you still use few-shot prompting with a fine-tune? Or does it make little difference?
replies(2): >>kcorbi+91 >>selfho+q9
2. kcorbi+91 2023-09-12 17:55:23
>>msp26+(OP)
Nope, no need for few-shot prompting in most cases once you've fine-tuned on your dataset, so you can save those tokens and get cheaper/faster responses!
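Rough sketch of the difference (this assumes the OpenAI Python client; the model id and the extraction task are just placeholders, not anything from this thread):

    # Before fine-tuning: instruction plus worked examples on every request.
    # After fine-tuning: the examples live in the training data, so each call
    # only pays for the instruction and the new input.
    from openai import OpenAI

    client = OpenAI()

    few_shot_messages = [
        {"role": "system", "content": "Extract the product name and price as JSON."},
        {"role": "user", "content": "The Acme Widget is on sale for $19.99."},
        {"role": "assistant", "content": '{"product": "Acme Widget", "price": 19.99}'},
        {"role": "user", "content": "Grab the new Foo Phone today, only $599!"},
    ]

    fine_tuned_messages = [
        {"role": "system", "content": "Extract the product name and price as JSON."},
        {"role": "user", "content": "Grab the new Foo Phone today, only $599!"},
    ]

    response = client.chat.completions.create(
        model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder fine-tune id
        messages=fine_tuned_messages,
    )
    print(response.choices[0].message.content)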
replies(1): >>selfho+Kb
3. selfho+q9 2023-09-12 18:31:34
>>msp26+(OP)
In my experience, there's little need for it. With completely unambiguous instructions that describe the exact output format, you can often get away with no examples at all. A single example can sometimes help, but multi-shot prompting is rarely necessary (and can even hurt the model's output quality).
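For example, a system prompt along these lines (the model name and schema are purely illustrative) usually works with zero examples:

    # Zero-shot sketch: the instruction pins down the exact output format
    # instead of demonstrating it with examples. All names are illustrative.
    from openai import OpenAI

    client = OpenAI()

    system_prompt = (
        "You extract contact details from emails. "
        "Respond with only a JSON object with exactly these keys: "
        '"name" (string), "email" (string), "phone" (string or null). '
        "No markdown, no explanations, no extra keys."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Hi, this is Jane Doe, you can reach me at jane@example.com."},
        ],
        temperature=0,
    )
    print(response.choices[0].message.content)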
4. selfho+Kb 2023-09-12 18:37:16
>>kcorbi+91
Not only that, but in a lot of cases you won't need to fine-tune at all: an existing instruct model often does a good enough job as long as the instructions are sufficiently unambiguous.