zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. idosh+9x[view] [source] 2023-09-12 19:05:11
>>kcorbi+(OP)
Can you elaborate on your plans for OpenPipe? Sounds like a very interesting project
2. Arctic+8C[view] [source] 2023-09-12 19:22:37
>>idosh+9x
Currently OpenPipe allows you to capture input/output from a powerful model and use it to fine-tune a much smaller one, then offers you the option to host through OpenPipe or download it and host it elsewhere. Models hosted on OpenPipe enjoy a few benefits, like data drift detection and automatic reformatting of output to match the original model you trained against (think extracting "function call" responses from a purely textual Llama 2 response) through the SDK.
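That reformatting step might look roughly like the following sketch. This is purely illustrative and not OpenPipe's actual SDK; the function name, the embedded-JSON convention, and the `"function"` key are all assumptions about how a textual Llama 2 reply could carry a tool call:

```python
import json
import re


def extract_function_call(text: str) -> dict:
    """Pull a JSON object out of a plain-text model response and reshape
    it into an OpenAI-style "function_call" assistant message.

    Hypothetical sketch: assumes the fine-tuned model was trained to emit
    a single JSON object whose "function" key names the call.
    """
    # Grab the first {...} span in the raw completion (greedy, so nested
    # objects stay intact for a single top-level object).
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")

    payload = json.loads(match.group(0))
    name = payload.pop("function", "unknown")

    # Mirror the shape OpenAI clients expect, so callers can swap the
    # fine-tuned model in without changing downstream parsing code.
    return {
        "role": "assistant",
        "content": None,
        "function_call": {"name": name, "arguments": json.dumps(payload)},
    }


raw = 'Sure! {"function": "get_weather", "city": "Paris"}'
msg = extract_function_call(raw)
```

Here `msg["function_call"]` carries `"get_weather"` with the remaining keys serialized as its arguments, matching the wire format of a native function-calling model.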

Longer-term, we'd love to expand the selection of base models to include specialized LLMs that are particularly good at a certain task, e.g. language translation, and let you train off of those as well. Providing a ton of specialized starting models will decrease the amount of training data you need, and increase the number of tasks at which fine-tuned models can excel.

[go to top]