zlacker
Fine-tune your own Llama 2 to replace GPT-3.5/4
1. carom+JE1
2023-09-13 00:17:57
>>kcorbi+(OP)
What are your thoughts on fine-tuning vs. low-rank adaptation (LoRA)?
2. brucet+jB4
2023-09-13 21:15:03
>>carom+JE1
Llama LoRAs are very customizable, ranging from "almost full fine-tuning" to "barely taking more VRAM than GPTQ inference."
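
As a rough illustration of those two extremes, here is a sketch using Hugging Face's peft and transformers libraries; the model name, rank values, and target-module lists are illustrative assumptions, not something specified in the thread:

    # Sketch only: parameter values and model name are illustrative, not prescriptive.
    # Requires transformers, peft, and bitsandbytes (for 4-bit loading).
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # "Almost full fine-tuning": high rank, adapters on every linear projection.
    heavy = LoraConfig(
        r=256, lora_alpha=512,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        lora_dropout=0.05, task_type="CAUSAL_LM",
    )

    # "Barely more VRAM than inference": low rank, attention projections only,
    # trained on top of a 4-bit quantized base model (QLoRA-style).
    light = LoraConfig(
        r=8, lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05, task_type="CAUSAL_LM",
    )

    base = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",  # assumed base model for the example
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    )
    model = get_peft_model(base, light)
    model.print_trainable_parameters()  # tiny fraction of the 7B base weights

The knobs doing the work are the rank r and which projection matrices get adapters; pairing the low-rank config with a 4-bit base model is what keeps training VRAM close to GPTQ inference levels.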