zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. Maschi+es 2023-09-12 18:50:01
>>kcorbi+(OP)
What makes sense to fine-tune and what not?

You said 50-1000 examples.

Do I fine-tune when I have specific Q/A sets, e.g. from real customers, and want to teach the model the right answers?

Do I fine-tune facts, or should I use some kind of lookup?

Does it make sense to add code and API docs for the current version of something I want better support for? E.g. ChatGPT knows Quarkus 2 but not Quarkus 3.

2. Arctic+Qv 2023-09-12 19:01:01
>>Maschi+es
Generally speaking, fine-tuning a small model makes sense when the task you want it to carry out is well-defined and doesn't vary much from prompt to prompt. Fine-tuning facts into a model doesn't scale well, but general textual style, output format, and evaluation criteria, for example, can all be instilled through fine-tuning. I would use lookup if your answers need to draw on a wide range of information that the base model wasn't trained on.
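
To make that concrete, here is a minimal Python sketch of the two approaches (the data and helper names are hypothetical, not from this thread): fine-tuning encodes the *task* as prompt/completion training pairs, while lookup keeps facts out of the weights and injects them into the prompt at inference time.

    import json

    # Fine-tuning: teach the task (style, output format, evaluation
    # criteria). Each training example is a prompt/completion pair;
    # you'd collect on the order of 50-1000 of these.
    train_examples = [
        {
            "prompt": "Summarize this support ticket in one sentence:\n"
                      "Customer cannot log in after password reset.",
            "completion": "Login fails after password reset; likely a stale session token.",
        },
        # ... more examples in the same format
    ]

    with open("train.jsonl", "w") as f:
        for ex in train_examples:
            f.write(json.dumps(ex) + "\n")

    # Lookup/retrieval: keep facts out of the weights and inject them
    # at inference time, so new information needs no retraining.
    docs = {
        "quarkus-3-migration": "Quarkus 3 moves from javax.* to jakarta.* packages...",
    }

    def build_prompt(question: str, doc_key: str) -> str:
        # Naive lookup by key; a real system would use embedding search.
        context = docs[doc_key]
        return f"Use the following docs to answer.\n\nDocs:\n{context}\n\nQ: {question}\nA:"

    print(build_prompt("How do I migrate to Quarkus 3?", "quarkus-3-migration"))

The practical upside of the lookup path is exactly the Quarkus 2 vs 3 case above: updating the docs is a file change, not a training run.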