zlacker

[return to "AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source"]
1. codedo+pF1[view] [source] 2024-02-12 22:09:57
>>mfigui+(OP)
As I understand it, Vulkan lets you run custom code on the GPU, including code to multiply matrices. Can one simply use Vulkan and ignore CUDA, PyTorch, and ROCm?
2. Peteri+MN2[view] [source] 2024-02-13 08:05:10
>>codedo+pF1
You probably can, but why would you? The main (only?) reason to ignore the CUDA-based stack is to save some money by using hardware other than Nvidia's. So the amount of engineering labor/cost you should be willing to accept is directly tied to how much hardware you intend to buy or rent, and what discount, if any, the alternative hardware offers compared to Nvidia.

So if you want to ignore CUDA+PyTorch and reimplement everything you need on top of Vulkan... well, that becomes worth discussing only if you expect to spend a lot on hardware and really believe the hardware savings can recoup many engineer-years of cost; otherwise it's more effective to just go with the flow.
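For a sense of scale: this is roughly what the CUDA stack gives you out of the box for the matrix multiply mentioned upthread. A minimal, naive sketch (not from this thread; `matmul` and `launch_matmul` are illustrative names), which a Vulkan-based reimplementation would have to reproduce via SPIR-V compute shaders, descriptor sets, pipelines, and explicit dispatch:

```cuda
#include <cuda_runtime.h>

// Naive kernel: C = A * B for N x N row-major float matrices,
// one thread per output element.
__global__ void matmul(const float *A, const float *B, float *C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < N; ++k)
            acc += A[row * N + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

// Host-side launch: 16x16 thread blocks, enough blocks to cover N x N.
void launch_matmul(const float *dA, const float *dB, float *dC, int N) {
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(dA, dB, dC, N);
}
```

And that is before the tuned libraries (cuBLAS, cuDNN) and framework integration that "ignoring CUDA" would also mean rebuilding.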

[go to top]