zlacker

[parent] [thread] 3 comments
1. paul-n+(OP)[view] [source] 2023-08-01 11:19:46
From what I've seen, utilisation is still pretty poor. GPUs are being used so inefficiently that most companies could get away with far fewer of them. Instead of looking at how to optimise their workflow, they just throw more GPUs at it.
replies(2): >>bushba+3n >>spott+5r1
2. bushba+3n[view] [source] 2023-08-01 14:06:52
>>paul-n+(OP)
I’ve noticed the same. Very low utilization, but everything gets used at peak every few weeks. For many companies, carrying the extra GPU cost to unblock the velocity of innovation is worth it, since the improvement to top-line revenue far exceeds the GPU cost.
3. spott+5r1[view] [source] 2023-08-01 18:25:03
>>paul-n+(OP)
Are you talking poor utilization in the "we have a bunch of GPUs sitting idle" sense, or poor utilization from a performance standpoint (can't keep the GPUs fed with data, that kind of thing)?
replies(1): >>paul-n+Yl2
4. paul-n+Yl2[view] [source] [discussion] 2023-08-01 21:58:39
>>spott+5r1
Kinda both honestly, but mostly that the code isn't written with GPU efficiency in mind at run time. For instance, no batching of requests, or not realising the CPU is actually the bottleneck.
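Rough sketch of what I mean (assumes a toy PyTorch model and dummy data, not any particular company's pipeline): scoring items one at a time starves the GPU, while batching the same work and pushing preprocessing into DataLoader worker processes keeps the GPU busy instead of the CPU becoming the bottleneck.

    import torch
    from torch.utils.data import DataLoader, Dataset

    class ToyDataset(Dataset):
        # Stand-in for real CPU-side preprocessing (decoding, tokenising, etc.).
        def __len__(self):
            return 4096
        def __getitem__(self, idx):
            return torch.randn(512)

    if __name__ == "__main__":
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = torch.nn.Linear(512, 10).to(device).eval()

        # Naive: one item per forward pass, preprocessing on the main thread,
        # so the GPU mostly sits idle waiting for the CPU.
        naive_loader = DataLoader(ToyDataset(), batch_size=1, num_workers=0)

        # Better: batch requests and do preprocessing in worker processes.
        batched_loader = DataLoader(ToyDataset(), batch_size=256,
                                    num_workers=4, pin_memory=True)

        with torch.no_grad():
            for batch in batched_loader:
                out = model(batch.to(device, non_blocking=True))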