zlacker

[return to "AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source"]
1. sharts+es[view] [source] 2024-02-12 16:13:07
>>mfigui+(OP)
AMD fails to realize that the software toolchain is what makes Nvidia great. AMD thinks the hardware is all that's needed.
2. JonChe+Ox[view] [source] 2024-02-12 16:37:37
>>sharts+es
Nvidia's toolchain is really not great. Applications are just written to step around the bugs.

ROCm has different bugs, which the application workarounds tend to miss.
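
A sketch of how that plays out in application code, assuming HIP's standard platform macro (the driver quirk and the workaround here are hypothetical):

    // Hypothetical vendor-specific workaround: the guarded branch only runs on
    // the platform the bug was observed on, so it never helps the other stack.
    #include <hip/hip_runtime.h>
    #include <algorithm>

    __global__ void scale(float* x, int n, float s) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    void launch_scale(float* x, int n, float s) {
        const int threads = 256;
    #if defined(__HIP_PLATFORM_NVIDIA__)
        // Cap the grid and loop on the host to dodge a (hypothetical)
        // scheduling bug seen only on the Nvidia back end.
        const int maxBlocks = 1024;
        for (int off = 0; off < n; off += maxBlocks * threads) {
            int chunk = std::min(n - off, maxBlocks * threads);
            scale<<<(chunk + threads - 1) / threads, threads>>>(x + off, chunk, s);
        }
    #else
        // The AMD path never exercises the workaround, so any ROCm-specific
        // bug in the plain launch goes unhandled.
        scale<<<(n + threads - 1) / threads, threads>>>(x, n, s);
    #endif
        hipDeviceSynchronize();
    }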

3. bornfr+Fm1[view] [source] 2024-02-12 20:31:07
>>JonChe+Ox
Yes. This is what makes Nvidia's toolchain, if not great, at least OK. As a developer I can actually use their GPUs. And what I develop locally I can then run on Nvidia hardware in the cloud and pay by usage.

AMD doesn't seem to understand that affordable entry-level hardware with good software support is key.

4. JonChe+Sr1[view] [source] 2024-02-12 20:57:07
>>bornfr+Fm1
Ah yes, that one does seem to be a stumbling block. The ROCm team is not remotely convinced that running on gaming cards is a particularly useful thing. HN is really sure that being able to develop code on the ~free cards you've got lying around anyway is an important gateway to running on amdgpu.

The sad thing is that people can absolutely run ROCm on gaming cards if they build it from source. Weirdly, GPU programmers seem determined to use prebuilt binaries on "supported" hardware, and thus stick with CUDA.
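
As a minimal sanity check, assuming a from-source ROCm/HIP build with hipcc on the path, the standard device-query calls are enough to show whether the runtime recognizes a gaming card at all:

    // Minimal HIP device query: prints what the runtime sees. gcnArchName is
    // the gfx target (e.g. gfx1030) the from-source build also has to cover.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
            std::printf("no usable HIP devices found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            if (hipGetDeviceProperties(&prop, i) != hipSuccess) continue;
            std::printf("device %d: %s (%s), %zu MiB\n", i, prop.name,
                        prop.gcnArchName, prop.totalGlobalMem / (1024 * 1024));
        }
        return 0;
    }

If the card shows up but its gfx target isn't among the architectures the toolchain was built for, that's usually where the "unsupported card" story falls apart.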

I don't understand why AMD won't list some graphics cards under "supported", even if they didn't test them as carefully as the MI series, and I don't understand why developers are so opposed to compiling their toolchains from source. For one thing, relying on prebuilt binaries means you can't debug the toolchain effectively when it falls over; that's a weird limitation to inflict on oneself.

Strange world.
