zlacker

[parent] [thread] 3 comments
1. Arctic+(OP)[view] [source] 2023-09-12 19:07:44
I haven't had much trouble with GPT 3.5 or 4 function calls returning in an undesirable format recently. I did get a few bad syntax responses when OpenAI first rolled it out, but not for the past few months.

Llama 2 can also pick up the function-call format, given sufficient training data containing function-call responses, though you'll then have to parse the returned object out of the text-based response.
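To illustrate that last step, here's a minimal sketch of pulling the function-call object out of a text response. The response string and function name are made up; the only assumption is that the model emits a JSON object somewhere in its output:

```python
import json

def extract_function_call(text: str):
    """Return the first parseable JSON object embedded in a text response.

    A fine-tuned model may wrap the call in prose, so we can't just
    json.loads() the whole response; instead, try decoding from each '{'.
    """
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch == "{":
            try:
                obj, _end = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue
    return None

# Hypothetical model output mimicking the GPT function-call shape:
response = 'Sure, calling it now: {"name": "get_weather", "arguments": {"city": "Berlin"}}'
call = extract_function_call(response)
```

Using `raw_decode` rather than a regex means nested braces inside `arguments` are handled correctly.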

replies(1): >>behnam+Us1
2. behnam+Us1[view] [source] 2023-09-13 03:33:59
>>Arctic+(OP)
Has anyone done such fine-tuning on Llama, though? Afaik most projects like llama.cpp use grammars instead.
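For context, llama.cpp's grammars are GBNF files that constrain sampling so the model can only emit tokens matching the grammar. A rough sketch of what a function-call-shaped grammar might look like (hypothetical rule names, simplified string/number rules; check against the `grammars/` examples in the llama.cpp repo before relying on the exact syntax):

```gbnf
# Force output shaped like a function call (simplified sketch)
root   ::= "{" ws "\"name\":" ws string "," ws "\"arguments\":" ws object ws "}"
object ::= "{" ws (string ws ":" ws value ("," ws string ws ":" ws value)*)? ws "}"
value  ::= string | number | object
string ::= "\"" [a-zA-Z0-9_ ]* "\""
number ::= [0-9]+
ws     ::= [ \t\n]*
```

The trade-off versus fine-tuning: a grammar guarantees syntactically valid output at inference time, while fine-tuning teaches the model *when* and *how* to call functions but can still produce malformed syntax.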
replies(1): >>Arctic+pA1
3. Arctic+pA1[view] [source] [discussion] 2023-09-13 04:53:05
>>behnam+Us1
Yep! The linked notebook includes an example of exactly that (fine-tuning a 7b model to match the syntax of GPT-4 function call responses): https://github.com/OpenPipe/OpenPipe/blob/main/examples/clas...
replies(1): >>behnam+1M2
4. behnam+1M2[view] [source] [discussion] 2023-09-13 14:03:28
>>Arctic+pA1
Thanks!