Nope, no need for few-shot prompting in most cases once you've fine-tuned on your dataset, so you can save those tokens and get cheaper/faster responses!
>> kcorbi (OP)
Not only that, but in a lot of cases you won't have to fine-tune at all, since an existing instruct model often does a good enough job as long as your instructions are clear and unambiguous.
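To make that concrete, here's a rough sketch (assuming the OpenAI Python SDK, v1.x; the model IDs and the little sentiment task are placeholders I made up): the same zero-shot call works whether you point it at a capable instruct model with a clear instruction or at a fine-tuned model ID, and either way you drop the few-shot examples that would otherwise be resent, and billed, on every request.

    # Rough sketch, assuming the OpenAI Python SDK (v1.x). Model IDs and the
    # sentiment task are placeholders for illustration only.
    from openai import OpenAI

    client = OpenAI()

    INSTRUCTION = (
        "Classify the sentiment of the support ticket as exactly one word: "
        "positive, negative, or neutral."
    )

    # Few-shot version: the examples ride along with every single request.
    few_shot = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": "Ticket: The update broke my login."},
            {"role": "assistant", "content": "negative"},
            {"role": "user", "content": "Ticket: Works great, thanks!"},
            {"role": "assistant", "content": "positive"},
            {"role": "user", "content": "Ticket: How do I export my data?"},
        ],
    )

    # Zero-shot version: just the unambiguous instruction plus the new input.
    # Point `model` at a capable instruct model, or swap in a fine-tuned model
    # ID (e.g. "ft:gpt-4o-mini:org::id", a placeholder) once you have one.
    zero_shot = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": "Ticket: How do I export my data?"},
        ],
    )

    print(few_shot.choices[0].message.content)
    print(zero_shot.choices[0].message.content)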