zlacker

1. TOMDM (OP) 2023-12-20 21:05:45
Yeah, a 7B foundation model is of course going to be worse when expected to perform on every task.

But finetuning on just a few tasks?

Depending on the task, it's totally reasonable to expect a 7B model to eke out a win against stock GPT-4, especially if the finetune bakes in domain knowledge and the task is light on demands for logical reasoning.
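
For concreteness, this is roughly what that kind of cheap domain finetune looks like with LoRA adapters via Hugging Face peft. The base model name, target modules, and hyperparameters below are placeholder assumptions, not anything from this thread:

    # Sketch of a LoRA finetune of a 7B foundation model (placeholder choices).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"  # any 7B base model would do here
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains small low-rank adapter matrices instead of the full 7B weights,
    # which is what makes a narrow, domain-specific finetune so cheap.
    config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                        target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the weights

Train something like that on a focused in-domain dataset and you're in the regime described above: narrow competence traded for breadth.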