zlacker
1. brucet+(OP)
2023-09-13 21:15:03
Llama LoRAs are very customizable, and range from "almost full finetuning" to "barely taking more VRAM than GPTQ inference".
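That range comes from the knobs LoRA exposes: the rank r and which weight matrices you adapt. A rough back-of-the-envelope sketch (dimensions are illustrative Llama-7B-like values, not exact, and the helper below is hypothetical):

```python
# Sketch: LoRA trainable-parameter count scales linearly with rank r.
# For each adapted weight W (d_out x d_in), LoRA adds two small matrices
# A (r x d_in) and B (d_out x r), i.e. r * (d_in + d_out) extra parameters.

def lora_params(layers, targets, r):
    """Total trainable params when adapting `targets` in each of `layers` blocks."""
    return sum(layers * r * (d_in + d_out) for (d_in, d_out) in targets)

hidden = 4096
attn_proj = (hidden, hidden)  # q/k/v/o projections are square at this size

# Low rank, two attention projections (q, v): a few million extra weights.
small = lora_params(layers=32, targets=[attn_proj] * 2, r=8)

# High rank, all four attention projections: hundreds of millions,
# approaching full-finetune territory.
large = lora_params(layers=32, targets=[attn_proj] * 4, r=256)

print(f"r=8,   q+v only: {small / 1e6:.1f}M trainable params")
print(f"r=256, q+k+v+o:  {large / 1e6:.1f}M trainable params")
```

With these assumed dimensions the low end is about 4M trainable parameters and the high end is over 250M, which is the spread the comment is gesturing at.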