Are fine-tuning datasets required to be input/output pairs? Or can the fine-tuning instead be autoregressive (predict the next token throughout a corpus of unlabeled documents)?
As a practical matter, though, most fine-tuning frameworks, including Axolotl (which this guide uses) and HuggingFace's SFTTrainer (the trainer most frameworks use under the hood), assume your data comes in input/output pairs, and they automatically insert a separator token to let the model know that the input has finished and it should start generating the output. In general, most tasks can be formulated this way, including autocomplete tasks, so I'd recommend going that route unless you have a very strong reason not to.
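To make the separator mechanics concrete, here's a minimal sketch of what an SFT-style trainer does with a pair under the hood. The tokenizer choice, separator string, and label-masking scheme are illustrative assumptions, not the exact internals of Axolotl or SFTTrainer:

```python
# Sketch: turning one input/output pair into a single training sequence.
# The separator string and -100 label masking are common conventions;
# actual frameworks vary in templates and whether they mask the input.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice

def build_example(input_text: str, output_text: str,
                  separator: str = "\n### Response:\n"):
    # Concatenate input + separator + output into one token sequence.
    input_ids = tokenizer(input_text + separator)["input_ids"]
    output_ids = tokenizer(output_text + tokenizer.eos_token)["input_ids"]
    tokens = input_ids + output_ids
    # Mask the input portion with -100 so the loss is computed only on
    # the output tokens (HuggingFace's causal LM loss ignores -100).
    labels = [-100] * len(input_ids) + output_ids
    return {"input_ids": tokens, "labels": labels}

example = build_example(
    "Summarize: The quick brown fox jumps over the lazy dog.",
    "A fox jumps over a dog.",
)
```

When these labels are fed to a causal language model, gradient only flows through the output tokens; the model still sees the input as context but isn't penalized for failing to predict it.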
For autocomplete tasks, with a corpus of unlabeled documents, would you insert the separator token at an arbitrary point in each document, in order to form input/output pairs?
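For reference, here's a sketch of what that idea would look like: split each unlabeled document at a random point, treating the first chunk as the input and the remainder as the output, with the framework's separator going between them. The word-boundary split heuristic is a hypothetical choice, just to make the question concrete:

```python
# Sketch: manufacturing input/output pairs from unlabeled documents by
# splitting at a random word boundary (a hypothetical heuristic;
# assumes each document has at least two words).
import random
from typing import Dict

def document_to_pair(document: str, rng: random.Random) -> Dict[str, str]:
    words = document.split()
    split_point = rng.randint(1, len(words) - 1)
    return {
        "input": " ".join(words[:split_point]),
        "output": " ".join(words[split_point:]),
    }

rng = random.Random(0)
pair = document_to_pair("The quick brown fox jumps over the lazy dog.", rng)
# pair["input"] holds the opening chunk, pair["output"] the rest of the
# document; the trainer's separator token is inserted between them.
```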