zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. rrherr+ez[view] [source] 2023-09-12 19:12:53
>>kcorbi+(OP)
"You do this by training an existing model on example input/output pairs that demonstrate the task you want your fine-tuned model to learn."

Are fine-tuning datasets required to be input/output pairs? Or instead, can the fine-tuning be autoregressive (predict the next token throughout this corpus of unlabeled documents)?

2. omneit+mF7[view] [source] 2023-09-14 19:24:30
>>rrherr+ez
What you described is basically an input/output pair: the input is the text so far, and the output is the next token. You build your dataset by splitting the raw text corpus into chunks (sentences, paragraphs, or documents), and for each chunk you generate input/target pairs by taking the tokens before position N as the input and the Nth token as the target. You do this for every token position in every chunk of the corpus.

For further reference you can lookup "next-token prediction objective".
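
To make that concrete, here's a minimal sketch (my own illustration, not from the article) of how a corpus can be expanded into next-token pairs. The whitespace split is just a stand-in for the base model's real subword tokenizer:

  # Toy next-token prediction pair generation (illustrative only).
  def tokenize(text: str) -> list[str]:
      # Placeholder: real fine-tuning would use the model's own tokenizer.
      return text.split()

  def next_token_pairs(chunk: str) -> list[tuple[list[str], str]]:
      # For one chunk, emit (input, target) pairs:
      # the first N tokens as input, token N+1 as the target.
      tokens = tokenize(chunk)
      return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

  corpus = [
      "the quick brown fox jumps over the lazy dog",
      "fine-tuning teaches the model a new task",
  ]

  for chunk in corpus:
      for context, target in next_token_pairs(chunk):
          print(context, "->", target)

In practice, frameworks don't materialize each pair separately; they feed the whole token sequence in once and compute the loss at every position at the same time, which is equivalent but far cheaper.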
