zlacker

[return to "Releasing weights for FLUX.1 Krea"]
1. leftst+Eg1[view] [source] 2025-07-31 21:31:15
>>vmatsi+(OP)
How much data is the model trained on?
2. dvrp+Yh1[view] [source] 2025-07-31 21:41:25
>>leftst+Eg1
Copying and pasting Sangwu’s answer:

We used two types of datasets for post-training: supervised finetuning data and preference data used for the RLHF stage. You can actually use fewer than 1M samples to significantly boost the aesthetics. Quality matters A LOT; quantity helps with generalisation and stability of the checkpoints, though.
