zlacker

[parent] [thread] 2 comments
1. kcorbi+(OP)[view] [source] 2023-09-12 19:31:48
There's no rule that your fine-tuning dataset needs to be split into input/output pairs -- you can of course fine-tune a model to just continue a sequence.

As a practical matter though, most fine-tuning frameworks, including Axolotl (which this guide uses) and HuggingFace's SFTTrainer (the trainer most frameworks use under the hood), assume your data comes in input/output pairs, and they automatically insert a separator token to let the model know the input has finished and it should start generating the output. Most tasks can be formulated this way, including autocomplete tasks, so I'd recommend that approach unless you have a strong reason not to.
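Conceptually, the pair-plus-separator formatting works like this (a sketch with an illustrative separator string, not SFTTrainer's exact internals or tokens):

```python
# Sketch: how one input/output record becomes a single training sequence.
# SEP is a hypothetical separator marking "input done, start generating";
# real frameworks use tokenizer-specific special tokens instead.
SEP = "</s>"

def format_pair(example: dict) -> str:
    """Join an input/output record into one sequence for causal LM training."""
    return example["input"] + SEP + example["output"] + SEP

record = {"input": "Translate to French: cheese", "output": "fromage"}
formatted = format_pair(record)
# The model is trained to continue the text after the first SEP,
# and typically the loss is masked on the input tokens.
```

In practice the trainer handles this templating for you; you just supply the paired records.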

replies(2): >>rrherr+pG >>omneit+V07
2. rrherr+pG[view] [source] 2023-09-12 22:15:39
>>kcorbi+(OP)
“most tasks can be formulated this way, including autocomplete tasks”

For autocomplete tasks, with a corpus of unlabeled documents, would you insert a separator token at an arbitrary space in each document, in order to form input/output pairs?
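Concretely, the splitting being asked about might look like this (a sketch of the idea, assuming a uniformly random cut point; nothing here is Axolotl- or SFTTrainer-specific):

```python
import random

def split_document(doc: str, rng: random.Random) -> dict:
    """Split an unlabeled document at an arbitrary position to form an
    input/output pair for autocomplete-style fine-tuning."""
    cut = rng.randint(1, len(doc) - 1)  # avoid an empty input or output
    return {"input": doc[:cut], "output": doc[cut:]}

rng = random.Random(0)
doc = "The quick brown fox jumps over the lazy dog."
pair = split_document(doc, rng)
# Concatenating the halves recovers the original document.
```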

3. omneit+V07[view] [source] 2023-09-14 19:27:28
>>kcorbi+(OP)
Axolotl accepts a lot of formats, and not all of them are input/output pairs.

The "completion" format takes just a single text value per dataset record. Other formats cover multiple-choice answers, etc.

Take a look below (there are more formats in "see other formats") https://github.com/OpenAccess-AI-Collective/axolotl#dataset
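A completion-format dataset is just one text field per record, commonly as JSONL. A minimal sketch of writing one (the filename is illustrative; see the Axolotl README linked above for the exact dataset config):

```python
import json

# Each record carries a single "text" field, used as-is for
# continuation-style fine-tuning (no input/output split).
records = [
    {"text": "First raw document, used verbatim for continuation training."},
    {"text": "Second raw document."},
]

path = "completion_data.jsonl"  # illustrative path
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```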
