
mk_stj (OP) | 2023-08-01 11:03:56
4090s have 24GB of 384-bit-wide GDDR6X with no ability to interconnect that memory to other 4090s except through PCIe bandwidth.

H100s have 80GB of 5120-bit HBM3, and the SXM variant has NVLink connecting eight of them at a time in a node.

HUGE difference in bandwidth when doing anything where inference on the model needs to be spread over multiple GPUs, which all LLMs do. And even more of a difference when training is in play.
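
Rough back-of-envelope sketch of why the interconnect gap dominates (the numbers are approximate published specs, and exact figures vary by configuration): once a model is sharded across GPUs, the data crossing cards moves at link speed, not at local memory speed.

    # Approximate published spec numbers; treat as ballpark, not exact.
    specs = {
        "RTX 4090": {"vram_gb": 24, "mem_bw_gbps": 1008,   # GDDR6X, 384-bit
                     "link": "PCIe 4.0 x16", "link_bw_gbps": 32},
        "H100 SXM": {"vram_gb": 80, "mem_bw_gbps": 3350,   # HBM3, 5120-bit
                     "link": "NVLink (4th gen)", "link_bw_gbps": 900},
    }

    for name, s in specs.items():
        # Cross-GPU traffic is limited by the link, not local memory bandwidth.
        ratio = s["mem_bw_gbps"] / s["link_bw_gbps"]
        print(f"{name}: {s['vram_gb']} GB VRAM, "
              f"~{s['mem_bw_gbps']} GB/s local memory, "
              f"~{s['link_bw_gbps']} GB/s over {s['link']} "
              f"(local memory is ~{ratio:.0f}x faster than the link)")

So on 4090s every cross-GPU hop is roughly a 30x slowdown relative to local VRAM, while NVLink'd H100s only drop a few-fold.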
