zlacker

[return to "AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source"]
1. lambda+6i[view] [source] 2024-02-12 15:32:07
>>mfigui+(OP)
It seems to me that AMD are crazy to stop funding this. CUDA-on-ROCm breaks NVIDIA's moat, and would also act as a disincentive for NVIDIA to make breaking changes to CUDA; what more could AMD want?

When you're #1, you can go all-in on your own proprietary stack, knowing that network effects will keep driving your market share higher for free.

When you're #2, you need to follow de-facto standards, work on creating and backing truly open ones, and compete on actual value rather than rent-seeking. AMD of all companies should know this.

◧◩
2. saboot+Hn[view] [source] 2024-02-12 15:55:13
>>lambda+6i
Yep, I develop several applications that use CUDA. I see AMD/Radeon-powered computers for sale and want to buy one, but I'm not going to risk being unable to run those applications, or having to rewrite them.

If they want me as a customer and they haven't created a viable alternative to CUDA, they need to pursue this.

◧◩◪
3. weebul+AC1[view] [source] 2024-02-12 21:53:19
>>saboot+Hn
Define "viable"?
◧◩◪◨
4. crouto+Sg2[view] [source] 2024-02-13 02:23:55
>>weebul+AC1
A backend that runs PyTorch out of the box and is as easy to set up and use as the NVIDIA stack.
◧◩◪◨⬒
5. weebul+sV7[view] [source] 2024-02-14 21:02:29
>>crouto+Sg2
Installing PyTorch for AMD by following the instructions on the PyTorch website was pretty painless for me on Linux. I know everybody's experience is different, but installation wasn't the issue for me.
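
For anyone wondering what "painless" means here: the install is a single pip command from the selector on pytorch.org (the ROCm version baked into the index URL changes between releases, so the one in the comment below is just an example), and a short sanity check afterwards confirms the GPU is visible. A minimal sketch, assuming a single supported Radeon card at device index 0:

    # Install command from pytorch.org; the rocm version in the URL
    # varies by release, this one is illustrative:
    #   pip3 install torch --index-url https://download.pytorch.org/whl/rocm6.0
    import torch

    # ROCm builds of PyTorch answer through the torch.cuda namespace,
    # so this returns True on a supported Radeon card too.
    print(torch.cuda.is_available())
    print(torch.version.hip)              # set on ROCm builds, None on CUDA builds
    print(torch.cuda.get_device_name(0))  # assumes one GPU at index 0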

For me, the issue on AMD was stability when VRAM was getting tight.
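
If you want to watch how close you are to that edge, recent PyTorch builds expose a free/total device-memory query through the same torch.cuda namespace; a sketch, assuming it is wired up the same way on ROCm builds and that device index 0 is the card in question:

    import torch

    # Free and total device memory in bytes for device 0.
    free, total = torch.cuda.mem_get_info(0)
    print(f"{free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB")

    # Returning cached-but-unused allocator blocks to the driver can
    # sometimes buy headroom before a large allocation when VRAM is tight.
    torch.cuda.empty_cache()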
