zlacker

[return to "Releasing weights for FLUX.1 Krea"]
1. bangal+y91 2025-07-31 20:52:12
>>vmatsi+(OP)
Can someone ELI5 why the safetensor file is 23.8 GB, given the 12B parameter model? Does the model use closer to 24 GB of VRAM or 12 GB of VRAM? I've always assumed 1 billion parameters = 1 GB of VRAM. Is this estimate inaccurate?
2. peterc+Y91 2025-07-31 20:54:10
>>bangal+y91
That's a good ballpark for something quantized to 8 bits per parameter. But you can 2x/4x that for 16-bit and 32-bit.
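
Back-of-the-envelope (assuming the file is essentially just the raw weights, which holds to within metadata noise):

    # Rough weight-file sizes for a 12B-parameter model at common precisions.
    PARAMS = 12e9

    bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "int8/fp8": 1}

    for fmt, nbytes in bytes_per_param.items():
        print(f"{fmt:>9}: ~{PARAMS * nbytes / 1e9:.0f} GB")

    # fp32: ~48 GB, fp16/bf16: ~24 GB, int8/fp8: ~12 GB.
    # ~24 GB at 16 bits matches the 23.8 GB safetensor file.
    # Just loading the weights needs roughly that much VRAM, plus
    # some overhead for activations.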
3. 773412+eb1 2025-07-31 21:02:09
>>peterc+Y91
I've never seen a 32-bit model. There are bound to be a few of them, but it's hardly a normal precision.
4. zamada+4f1 2025-07-31 21:21:09
>>773412+eb1
Some of the most famous models were distributed as F32, e.g. GPT-2. As things have shifted more towards mass consumption of model weights it's become less and less common to see.
5. nodja+Io1 2025-07-31 22:35:10
>>zamada+4f1
> As things have shifted more towards mass consumption of model weights it's become less and less common to see.

Not the real reason. The real reason is that training has moved to FP16/BF16 over the years as NVIDIA made those formats more efficient in its hardware; that's the same reason you're starting to see some models released in 8-bit formats (DeepSeek).

Of course people can always quantize the weights to smaller sizes, but the master versions of the weights are usually 16-bit.
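
Toy example of that quantization step (PyTorch; naive symmetric int8 for illustration, real releases use fancier schemes):

    import torch

    # "Master" weights as trained/released: 16-bit (bf16), 2 bytes each.
    master = torch.randn(1000, 1000, dtype=torch.bfloat16)

    # Naive symmetric int8 quantization: 1 byte each, half the file size.
    scale = master.float().abs().max() / 127.0
    q8 = torch.clamp(torch.round(master.float() / scale), -127, 127).to(torch.int8)

    print(master.element_size(), q8.element_size())  # 2 1

    # Dequantize to use; the error vs. the 16-bit master is the price
    # you pay for the smaller download.
    err = (master.float() - q8.float() * scale).abs().max()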
