1. throwa+ (OP) 2023-12-28 13:52:55
We use LLMs for classification. When you have limited data, LLMs work better than standard classification models like random forests. In some cases, we found LLM-generated labels to be more accurate than human ones.

Labeling a few samples, LoRA fine-tuning an LLM on them, generating labels on millions of samples, and then training a standard classifier on those labels is an easy way to get a good classifier in a matter of hours or days.
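To make the last two steps concrete, here's a minimal sketch of the label-then-distill part of that pipeline. The `llm_label` function is a hypothetical stand-in for prompting the LoRA-tuned LLM; it's replaced with a toy keyword rule so the example runs without any model weights, and the "millions of samples" corpus is shrunk to a handful of strings.

```python
# Sketch: generate silver labels with a (mocked) LLM, then distill
# them into a cheap standard classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def llm_label(text: str) -> int:
    # Hypothetical placeholder for "ask the fine-tuned LLM for a label".
    # Here: a trivial keyword rule, 1 = refund request, 0 = other.
    return 1 if "refund" in text.lower() else 0

# Step 1: label a large unlabeled corpus with the LLM (silver labels).
corpus = [
    "I want a refund for this order",
    "Where is my package",
    "Please refund me, the item arrived broken",
    "How do I reset my password",
]
silver_labels = [llm_label(t) for t in corpus]

# Step 2: train a fast, cheap classifier on the silver labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(corpus, silver_labels)

# At inference time, only the small classifier runs; the LLM is gone.
preds = clf.predict(corpus)
print(list(preds))
```

The payoff is at serving time: the distilled classifier is orders of magnitude cheaper per prediction than calling the LLM, while approximating its labeling behavior.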

Basically, for any task where you can tolerate some inaccuracy, LLMs can be a great tool. So I don't think LLMs are a fad as such.