zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. msp26+Tb[view] [source] 2023-09-12 17:49:20
>>kcorbi+(OP)
Do you still use few-shot prompting with a fine-tune? Or does it make little difference?
2. kcorbi+2d[view] [source] 2023-09-12 17:55:23
>>msp26+Tb
Nope, no need for few-shot prompting in most cases once you've fine-tuned on your dataset, so you can save those tokens and get cheaper/faster responses!
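For illustration (not from the thread), here's a rough sketch of the token savings being described: a base model typically needs worked examples in the prompt, while a fine-tuned model has learned the format and only needs the query. The prompt format, task, and examples below are all hypothetical, and whitespace splitting is only a crude proxy for real tokenization.

```python
def build_few_shot_prompt(task, examples, query):
    """Base-model style: prepend worked examples before the actual query."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

def build_zero_shot_prompt(query):
    """Fine-tuned style: the model learned the format, so send only the query."""
    return f"Input: {query}\nOutput:"

# Hypothetical sentiment-classification task and examples.
task = "Classify the sentiment of the input."
examples = [
    ("The movie was fantastic", "positive"),
    ("Worst purchase ever", "negative"),
]
query = "Shipping was fast and the product works great"

few_shot = build_few_shot_prompt(task, examples, query)
zero_shot = build_zero_shot_prompt(query)

# Rough word-count proxy for tokens; real tokenizers will differ,
# but the zero-shot prompt is always strictly shorter.
print(len(few_shot.split()), len(zero_shot.split()))
```

Every request pays for the prompt, so dropping the examples cuts both cost and latency on each call.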