zlacker

[parent] [thread] 3 comments
1. dragon+(OP)[view] [source] 2025-08-01 01:10:23
I think they are bfloat16, not FP16, but they are both 16bpw formats, so it doesn't make a size difference.
replies(2): >>Tokume+6o >>iyn+pB
2. Tokume+6o[view] [source] 2025-08-01 06:13:27
>>dragon+(OP)
Pardon the ignorance, but this is the first time I've heard of bfloat16.

I asked chat for an explanation and it said bfloat has a higher range (like FP32) but less precision.

What does that mean for image generation, and why was bfloat16 chosen over FP16?

replies(1): >>dragon+WF
3. iyn+pB[view] [source] 2025-08-01 08:38:44
>>dragon+(OP)
Wiki article on bfloat16 for reference, since it was new to me: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
4. dragon+WF[view] [source] [discussion] 2025-08-01 09:25:03
>>Tokume+6o
My fuzzy understanding, and I'm not at all an expert on this, is that the main benefit is that bf16 is less prone to overflow/underflow during calculation, which is a bigger source of problems in both training and inference than the simple loss of precision. So once it became widely supported in hardware, it became a commonly preferred format for models (image gen or otherwise) over FP16.
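A quick way to see the trade-off in Python. This is a hand-rolled sketch of bf16 rounding (the "truncate FP32 to its top 16 bits" trick), not any framework's actual conversion code; numpy is only used for the FP16 side:

```python
import math
import struct

import numpy as np  # only for the FP16 comparison


def to_bf16(x: float) -> float:
    """Round an FP32 value to bfloat16 (round-to-nearest-even) and return
    it as a Python float. bf16 is FP32 with the low 16 mantissa bits
    dropped, so it keeps FP32's 8-bit exponent (range) but has only
    7 mantissa bits (precision). FP16 instead spends bits the other way:
    5-bit exponent, 10-bit mantissa."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # round-to-nearest-even on the low 16 bits, then truncate them
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF_0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]


x = 100_000.0              # above FP16's max finite value, 65504

bf = to_bf16(x)            # finite but coarse: 99840.0 (~0.16% off)
fp = float(np.float16(x))  # overflows straight to +inf

print(bf, math.isinf(fp))
```

So a value that FP16 silently turns into infinity survives in bf16, just with fewer significant digits — which matches the "range over precision" choice described above.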
[go to top]