zlacker

[return to "Mistral 7B Fine-Tune Optimized"]
1. YetAno+wB 2023-12-20 23:40:14
>>tosh+(OP)
One thing that most people don't realize is that (full-parameter) fine-tuned models are costly to serve unless you run them in batched mode. That means unless the request rate is high and consistent, it is cheaper to use prompts with GPT-3.5; e.g. at a batch size of 1, Mistral is more expensive than GPT-4 [1].

[1]: https://docs.mystic.ai/docs/mistral-ai-7b-vllm-fast-inferenc...
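To make the batching point concrete, here's a rough back-of-the-envelope in Python. The GPU price and per-stream throughput below are made-up placeholders, not numbers from the link above; the point is just that cost per token falls roughly in proportion to batch size until the GPU saturates.

    # Back-of-the-envelope self-hosting cost vs. batch size.
    # All numbers are illustrative placeholders, not benchmarks.

    GPU_COST_PER_HOUR = 1.20             # rented GPU, $/hr (assumed)
    TOKENS_PER_SEC_PER_STREAM = 40       # decode speed of one request (assumed)

    def cost_per_1m_tokens(batch_size: int) -> float:
        """Cost of generating 1M tokens when batch_size requests share the GPU.

        Throughput grows roughly with batch size until the GPU saturates,
        so the cost per token drops as batching increases.
        """
        tokens_per_hour = TOKENS_PER_SEC_PER_STREAM * batch_size * 3600
        return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

    for bs in (1, 8, 32):
        print(f"batch={bs:>2}: ${cost_per_1m_tokens(bs):.2f} per 1M tokens")

At batch 1 you're paying for a whole GPU to serve a single stream, which is why low or bursty traffic is usually cheaper on a pay-per-token API.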

2. MacsHe+5p1 2023-12-21 08:26:24
>>YetAno+wB
I can cloud-host Mistral 7B for 20x cheaper than GPT-4 Turbo.

And the Mistral 7B Instruct API is $0.00/1M tokens, i.e. free: https://openrouter.ai/models/mistralai/mistral-7b-instruct
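If anyone wants to try it, OpenRouter exposes an OpenAI-compatible chat completions endpoint; a minimal sketch (you still need an OpenRouter API key even for the free model, and double-check their docs for the current request/response fields):

    # Minimal request to Mistral 7B Instruct via OpenRouter.
    # Assumes OPENROUTER_API_KEY is set in the environment.
    import os
    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "mistralai/mistral-7b-instruct",
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])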
