Fine-tune your own Llama 2 to replace GPT-3.5/4

1. binary+Dt 2023-09-12 18:53:38
>>kcorbi+(OP)
This looks awesome! Tangential question: do you find GPT function calling to work consistently and without error, or do you get errors when using it? By errors I mostly mean incorrect function signatures/types or missing values, but any other unpredictable behavior you've seen would help too.
2. Arctic+Yx 2023-09-12 19:07:44
>>binary+Dt
I haven't had much trouble with GPT 3.5 or 4 function calls returning in an undesirable format recently. I did get a few bad syntax responses when OpenAI first rolled it out, but not for the past few months.

Llama 2 can also pick up the function-call format, given sufficient training data containing function-call responses, though you'll then have to parse the returned object out of the text-based response.
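
A minimal sketch of that parsing step, assuming the fine-tuned model embeds the call as a JSON object somewhere in its text response (the exact payload shape here is an assumption for illustration, not something OpenPipe prescribes):

    import json
    import re

    def extract_function_call(response_text):
        """Pull the first JSON object out of a text completion.

        Assumes the model was trained to emit calls shaped like
        {"name": ..., "arguments": {...}} inside its response.
        """
        match = re.search(r"\{.*\}", response_text, re.DOTALL)
        if match is None:
            return None
        try:
            call = json.loads(match.group(0))
        except json.JSONDecodeError:
            return None  # malformed JSON; caller should retry or fall back
        # Basic shape check mirroring OpenAI's function_call payload.
        if "name" in call and "arguments" in call:
            return call
        return None

    print(extract_function_call(
        'Calling the tool: {"name": "get_weather", "arguments": {"city": "Berlin"}}'
    ))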

3. behnam+S02 2023-09-13 03:33:59
>>Arctic+Yx
Has anyone actually done this kind of fine-tuning on Llama, though? AFAIK most projects like llama.cpp use grammars to constrain the output instead.
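
For contrast, the grammar approach constrains decoding rather than teaching the model the format. A rough sketch using the llama-cpp-python bindings (the model path and the toy GBNF grammar are assumptions for illustration; a real grammar would also constrain the argument schema):

    from llama_cpp import Llama, LlamaGrammar

    # Toy GBNF grammar admitting only a flat {"name": ..., "arguments": ...}
    # object with simple string values.
    FUNCTION_CALL_GBNF = r'''
    root   ::= "{" ws "\"name\"" ws ":" ws string ws "," ws "\"arguments\"" ws ":" ws string ws "}"
    string ::= "\"" [a-zA-Z0-9_ ]* "\""
    ws     ::= [ \t\n]*
    '''

    llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")  # illustrative path
    grammar = LlamaGrammar.from_string(FUNCTION_CALL_GBNF)

    out = llm(
        "Call a function to fetch the weather in Berlin.",
        grammar=grammar,   # sampler can only emit strings the grammar accepts
        max_tokens=128,
    )
    print(out["choices"][0]["text"])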
4. Arctic+n82 2023-09-13 04:53:05
>>behnam+S02
Yep! The linked notebook includes an example of exactly that (fine-tuning a 7b model to match the syntax of GPT-4 function call responses): https://github.com/OpenPipe/OpenPipe/blob/main/examples/clas...
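
For anyone wondering what such training data looks like: a sketch of one example, assuming OpenAI's chat format with a function_call payload (the field names follow OpenAI's published schema; the exact layout the OpenPipe notebook expects may differ, so check the link):

    import json

    # One supervised example teaching the model to answer with a
    # GPT-4-style function_call payload instead of free-form text.
    example = {
        "messages": [
            {"role": "user", "content": "What's the weather in Berlin?"},
            {
                "role": "assistant",
                "content": None,
                "function_call": {
                    "name": "get_weather",
                    # OpenAI serializes arguments as a JSON string.
                    "arguments": json.dumps({"city": "Berlin", "unit": "celsius"}),
                },
            },
        ]
    }

    # Training sets are commonly stored as JSONL, one example per line.
    with open("train.jsonl", "a") as f:
        f.write(json.dumps(example) + "\n")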