1. lyjack+(OP)
2023-12-21 01:54:39
It does. LLMs are most efficient when running large batches, so the GPU cost is super high if you're underutilizing the hardware. Self-hosting will cost more than a provider like OpenAI, which has the request volume to keep its GPUs saturated.
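To make the batching point concrete, here's a rough back-of-the-envelope sketch in Python. The hourly GPU rate and per-sequence decode speed are made-up assumptions, and it ignores the per-sequence slowdown once a large batch saturates the GPU, but it shows why cost per token collapses as utilization rises:

    # Back-of-the-envelope: cost per million generated tokens vs. batch size.
    # All numbers below are illustrative assumptions, not measurements.
    GPU_COST_PER_HOUR = 4.00        # assumed $/hr for one H100-class GPU
    TOKENS_PER_SEC_PER_SEQ = 50     # assumed decode speed per sequence

    def cost_per_million_tokens(batch_size: int) -> float:
        # Decode is roughly memory-bound, so (up to saturation) each extra
        # concurrent sequence adds throughput at near-zero marginal cost.
        tokens_per_hour = TOKENS_PER_SEC_PER_SEQ * batch_size * 3600
        return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

    for bs in (1, 8, 64):
        print(f"batch={bs:3d}: ${cost_per_million_tokens(bs):,.2f} / 1M tokens")
    # batch=1 -> ~$22.22; batch=8 -> ~$2.78; batch=64 -> ~$0.35

Under these assumed numbers, a lone self-hoster decoding one request at a time pays ~60x more per token than someone who can keep 64 sequences in flight.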
2. jay-ba+ye
2023-12-21 04:35:54
>>lyjack+(OP)
Yup. It's also worth mentioning that OpenAI has the luxury of large clusters of H100s (last time I checked).