zlacker

[return to "Mistral 7B Fine-Tune Optimized"]
1. averev+Lo[view] [source] 2023-12-20 22:13:37
>>tosh+(OP)
not a bad model, becomes incoherent above 8k tokens, and it's not helped by the fact that it's very verbose, but it seems very coherent and stays closely on topic until then: https://chat.openai.com/share/089d1b8c-3467-4c01-af9f-6568c0...

fails at math of course, even if the problem is very easy, like all Mistrals. good for generation, probably not the best for RAG; there are Mistral tunes that stay coherent to 16k tokens, and that cuts down chunking significantly
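
to make the chunking point concrete, here's a toy sketch (the 8k/16k budgets are just the coherence limits mentioned above, the whitespace "tokenizer" and the 4-chunks-per-prompt split are made up): once the model stays coherent over a longer window, the same document fits in fewer, larger chunks

    # toy illustration only: real token counts need the model's tokenizer
    def chunk(words, chunk_tokens):
        return [" ".join(words[i:i + chunk_tokens])
                for i in range(0, len(words), chunk_tokens)]

    words = ("word " * 32000).split()      # pretend ~32k-token document

    for budget in (8000, 16000):           # tokens the model stays coherent over
        per_chunk = budget // 4            # leave room for ~4 retrieved chunks per prompt
        chunks = chunk(words, per_chunk)
        print(f"budget={budget}: {len(chunks)} chunks of ~{per_chunk} tokens each")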

2. Muffin+cR[view] [source] 2023-12-21 01:42:08
>>averev+Lo
> fails at math of course

what did OpenAI do to get the LLM to know "if given a math question, write Python for it and run the code to get the result" instead of trying to do the math itself?

3. Me1000+oS[view] [source] 2023-12-21 01:57:23
>>Muffin+cR
OpenAI trained the model on a lot of data where it writes code instead (probably sandwiched between some special tokens like [run-python]). The LLM runner then takes that code, runs it in a sandbox, and feeds the output back into the prompt so GPT can continue generating. But TL;DR: it trained the model to write code for math problems instead of trying to solve them itself.
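
roughly what that loop looks like (a sketch: the [run-python] token names and the generate() stub are made up for illustration, and a bare subprocess is not a real sandbox):

    import re, subprocess, sys

    OPEN_TOK, CLOSE_TOK = "[run-python]", "[/run-python]"   # hypothetical token names

    def generate(prompt):
        # stand-in for the actual LLM call; a real runner samples from the model
        if "[output]" in prompt:
            result = prompt.rsplit("[output]", 1)[1].split("[/output]")[0].strip()
            return "The result is " + result
        return OPEN_TOK + "\nprint(12345 * 6789)\n" + CLOSE_TOK

    def run_sandboxed(code):
        # illustration only: a plain subprocess is NOT a real sandbox
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, timeout=5)
        return proc.stdout + proc.stderr

    def answer(prompt):
        reply = generate(prompt)
        m = re.search(re.escape(OPEN_TOK) + r"(.*?)" + re.escape(CLOSE_TOK), reply, re.S)
        if m:
            tool_output = run_sandboxed(m.group(1))
            # feed the code's output back into the prompt and keep generating
            return generate(prompt + reply + "\n[output]\n" + tool_output + "[/output]\n")
        return reply

    print(answer("What is 12345 * 6789?"))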