zlacker

[return to "AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source"]
1. lambda+6i 2024-02-12 15:32:07
>>mfigui+(OP)
It seems to me that AMD are crazy to stop funding this. CUDA-on-ROCm breaks NVIDIA's moat, and would also act as a disincentive for NVIDIA to make breaking changes to CUDA; what more could AMD want?

When you're #1, you can go all-in on your own proprietary stack, knowing that network effects will keep driving your market share higher for free.

When you're #2, you need to follow de-facto standards and work on creating and following truly open ones, and try to compete on actual value, rather than rent-seeking. AMD of all companies should know this.

2. RamRod+dl 2024-02-12 15:44:37
>>lambda+6i
> and would also act as a disincentive for NVIDIA to make breaking changes to CUDA

I don't know about that. You could kinda argue the opposite. "We improved CUDA. Oh, it stopped working for you on AMD hardware? Too bad. Buy Nvidia next time."

3. freeon+lw 2024-02-12 16:30:28
>>RamRod+dl
Most CUDA applications do not target the newest CUDA version! Even with 12.1 out, lots of code still targets CUDA 7 or 8 to keep supporting older NVIDIA cards. Similar support for AMD isn't unthinkable (but a full rewrite to ROCm would be).
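
To make that concrete, here's a minimal illustrative sketch (the kernel and version numbers are made up, not from any real project) of how such code typically stays on an old toolkit: it queries the runtime version at startup and fences anything newer than CUDA 8 behind the CUDART_VERSION macro, so the same source still builds against 7.x/8.x toolchains.

    // Illustrative only: old-toolkit-compatible CUDA code.
    #include <cuda_runtime.h>
    #include <cstdio>

    // Simple kernel using only features available since the earliest CUDA releases.
    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        int runtime_version = 0;
        cudaRuntimeGetVersion(&runtime_version);   // e.g. 8000 for CUDA 8.0
        printf("CUDA runtime version: %d\n", runtime_version);

    #if CUDART_VERSION >= 9000
        // Post-8.0 features (e.g. cooperative groups) would be guarded here;
        // older toolchains skip this branch at compile time.
    #endif

        const int n = 1 << 20;
        float *d_data = nullptr;
        cudaMalloc(&d_data, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
        cudaDeviceSynchronize();
        cudaFree(d_data);
        return 0;
    }
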
4. lambda+kO1 2024-02-12 23:03:56
>>freeon+lw
That's exactly the point I was making above.