zlacker

1. leftst+(OP) 2025-07-31 21:31:15
How much data is the model trained on?
2. dvrp+k1 2025-07-31 21:41:25
>>leftst+(OP)
Copying and pasting Sangwu’s answer:

We used two types of datasets for post-training: supervised finetuning data, and preference data used for the RLHF stage. You can actually use fewer than 1M samples to significantly boost the aesthetics. Quality matters A LOT. Quantity helps with generalisation and stability of the checkpoints, though.
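
A minimal sketch of what those two dataset types typically look like (field names here are illustrative assumptions, not the team's actual schema):

    # Supervised finetuning (SFT): a prompt paired with one reference completion.
    sft_example = {
        "prompt": "Write a short product description for a ceramic mug.",
        "completion": "A hand-glazed ceramic mug that keeps your coffee warm...",
    }

    # Preference data for the RLHF stage: the same prompt with a chosen and a
    # rejected completion, used to fit a reward model or a direct preference
    # objective such as DPO.
    preference_example = {
        "prompt": "Write a short product description for a ceramic mug.",
        "chosen": "A hand-glazed ceramic mug that keeps your coffee warm...",
        "rejected": "mug. ceramic. buy.",
    }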

3. lawles+Uj 2025-08-01 00:12:18
>>dvrp+k1
How is data acquired and curated?