>>nickth
Anecdotally, I finetuned Mistral 7B for a specific (and slightly unusual) natural language processing task just a few days ago. GPT-4 can do the task, but it needs a long, complex prompt and only gets it right about 80-90% of the time - the finetuned model is noticeably more accurate and needs far fewer prompt tokens. (In fact it does so well that I suspect I could get good results with an even smaller model.)