zlacker

[parent] [thread] 3 comments
1. ddtayl+(OP)[view] [source] 2026-01-23 14:55:42
You are probably triggering their knowledge distillation checks.
replies(2): >>andrew+cv >>faeyan+GS
2. andrew+cv[view] [source] 2026-01-23 17:27:11
>>ddtayl+(OP)
This was my first thought as well
3. faeyan+GS[view] [source] 2026-01-23 19:21:28
>>ddtayl+(OP)
what would a knowledge distillation prompt even look like, and how could I make sure I don't accidentally fall into this trap?
replies(1): >>ddtayl+M05
4. ddtayl+M05[view] [source] [discussion] 2026-01-25 09:57:30
>>faeyan+GS
My guess is that something that looks like the "teacher and student" model. I know there were methods in the past to utilize the token distribution to "retrain" one model with another, kind of like an auto fine-tuning, but AFAIK those are for offline model usage since you need the token distribution. There do appear to be similar methods for online-only models?