zlacker

1. binary+(OP) 2023-09-12 18:53:38
This looks awesome! Tangential question: do you find GPT function calling to work consistently, or do you get errors when using it? By errors I mostly mean incorrect function signatures/types or missing values, but any other unpredictable behavior you've seen would help too.
replies(2): >>Arctic+l4 >>llwj+4R
2. Arctic+l4 2023-09-12 19:07:44
>>binary+(OP)
I haven't had much trouble with GPT 3.5 or 4 function calls returning in an undesirable format recently. I did get a few bad syntax responses when OpenAI first rolled it out, but not for the past few months.
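
For reference, the flow looks roughly like this (untested sketch from memory of the 2023-era openai 0.x API; the get_weather function and its schema are made up):

    import json
    import openai

    # Made-up function schema for illustration
    functions = [{
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        functions=functions,
    )

    call = resp["choices"][0]["message"].get("function_call")
    if call:
        # arguments arrive as a JSON string; the early bad-syntax
        # responses would fail right here at json.loads
        args = json.loads(call["arguments"])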

Llama 2 can also pick up the function-call format, given sufficient training data containing function-call responses, though you'll then have to parse the returned object out of the text-based response yourself.
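
The parsing step can be something like this (a minimal sketch, assuming the fine-tuned model emits a single JSON object somewhere in its completion):

    import json
    import re

    def extract_function_call(text):
        # Grab the outermost {...} span from the completion
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            return None
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            # Model drifted from the trained format
            return None

    extract_function_call('On it! {"name": "get_weather", "arguments": {"city": "Paris"}}')
    # -> {'name': 'get_weather', 'arguments': {'city': 'Paris'}}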

replies(1): >>behnam+fx1
3. llwj+4R 2023-09-12 22:10:37
>>binary+(OP)
I see wrong responses about 1% of the time, but I love it anyway: parsing raw text output without function calling had a much higher error rate.
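
One way to catch that ~1% is to validate the parsed arguments against the parameter schema you declared and re-prompt on failure; rough sketch using the jsonschema library (the schema here is made up):

    import json
    from jsonschema import ValidationError, validate

    # Illustrative parameter schema; use whatever you declared to the model
    SCHEMA = {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    }

    def parse_args(raw):
        try:
            args = json.loads(raw)
            validate(args, SCHEMA)
            return args
        except (json.JSONDecodeError, ValidationError):
            return None  # caller can re-prompt the model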
4. behnam+fx1 2023-09-13 03:33:59
>>Arctic+l4
Has anyone actually done that kind of fine-tuning on Llama, though? AFAIK most projects, like llama.cpp, use grammars instead.
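
By grammars I mean constrained sampling, where the output is valid JSON by construction with no fine-tuning needed. Something like this with the llama-cpp-python bindings (untested sketch; the grammar and model path are made up):

    from llama_cpp import Llama, LlamaGrammar

    # Toy GBNF grammar: force output of the form
    # {"name": "...", "arguments": "..."}
    GBNF = r'''
    root   ::= "{\"name\": " string ", \"arguments\": " string "}"
    string ::= "\"" [a-zA-Z0-9_ ]* "\""
    '''

    llm = Llama(model_path="./llama-2-7b.Q4_K_M.gguf")  # hypothetical path
    grammar = LlamaGrammar.from_string(GBNF)
    out = llm("Get the weather in Paris.", grammar=grammar)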
replies(1): >>Arctic+KE1
5. Arctic+KE1 2023-09-13 04:53:05
>>behnam+fx1
Yep! The linked notebook includes an example of exactly that (fine-tuning a 7b model to match the syntax of GPT-4 function call responses): https://github.com/OpenPipe/OpenPipe/blob/main/examples/clas...
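
A training row for that kind of setup might look something like this (hypothetical shape for illustration, not taken from the linked notebook):

    # Hypothetical prompt/completion pair: the completion is a
    # GPT-4-style function-call object serialized as text
    example = {
        "prompt": "Classify the sentiment of: 'I love this product!'",
        "completion": '{"name": "classify", "arguments": {"sentiment": "positive"}}',
    }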
replies(1): >>behnam+mQ2
6. behnam+mQ2 2023-09-13 14:03:28
>>Arctic+KE1
Thanks!