1. carom+(OP)
2023-09-13 00:17:57
What are your thoughts on full fine-tuning vs. low-rank adaptation (LoRA)?
2. brucet+AW2
2023-09-13 21:15:03
>>carom+(OP)
Llama LoRAs are very customizable: depending on the rank and which weight matrices you target, they range from "almost full fine-tuning" to "barely taking more VRAM than GPTQ inference".
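
Not from the thread, but to make that rank tradeoff concrete, here is a minimal sketch assuming the Hugging Face transformers and peft libraries; the model id, rank values, and target modules are illustrative, not a recommendation:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Illustrative model id; any causal LM works the same way.
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    # High-rank adapter on every projection matrix: many trainable
    # parameters, much more VRAM -- the "almost full fine-tuning" end.
    heavy = LoraConfig(
        r=256, lora_alpha=512,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        task_type="CAUSAL_LM",
    )

    # Low-rank adapter on attention q/v only: a tiny fraction of the
    # weights train, so VRAM overhead beyond inference stays small.
    light = LoraConfig(
        r=8, lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, light)
    # Roughly 0.06% of a 7B model's weights are trainable at r=8 on q/v.
    model.print_trainable_parameters()
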