zlacker

[parent] [thread] 4 comments
1. ribs+(OP)[view] [source] 2023-03-05 08:04:09
I keep hearing that ROCm is DOA, but there are a lot of supercomputing labs that are heavily investing in it, with engineers who are quite in favor of it.
replies(4): >>mschue+e2 >>pixele+u4 >>doikor+T6 >>pjmlp+Z7
2. mschue+e2[view] [source] 2023-03-05 08:40:22
>>ribs+(OP)
I hope it takes off, a platform independent alternative to CUDA would be great. But if they want it to be successful outside of supercomputing labs, it needs to be as easy to use as CUDA. And I'd say being successful outside of supercomputing labs is important for overall adoption. For me personally, it would also need fast runtime compilation so that you can modify and hot-reload ROCm programs at runtime.
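For what it's worth, ROCm does ship a runtime compiler, hipRTC, which mirrors NVIDIA's NVRTC. A minimal sketch of the compile-and-load cycle you'd repeat on each hot reload (error handling elided; the kernel name `scale` is just an example, and this obviously needs a ROCm-capable machine to actually run):

```cpp
#include <hip/hip_runtime.h>
#include <hip/hiprtc.h>
#include <cstdio>
#include <vector>

// Kernel source held as a string so it can be edited and recompiled at runtime.
static const char* kSrc = R"(
extern "C" __global__ void scale(float* x, float f) {
    x[blockIdx.x * blockDim.x + threadIdx.x] *= f;
})";

int main() {
    // 1. Compile the source string to a code object for the current device.
    hiprtcProgram prog;
    hiprtcCreateProgram(&prog, kSrc, "scale.cu", 0, nullptr, nullptr);
    hiprtcCompileProgram(prog, 0, nullptr);

    size_t size = 0;
    hiprtcGetCodeSize(prog, &size);
    std::vector<char> code(size);
    hiprtcGetCode(prog, code.data());
    hiprtcDestroyProgram(&prog);

    // 2. Load the freshly built code object and fetch the kernel. On a hot
    //    reload you'd unload the old module and repeat from step 1.
    hipModule_t module;
    hipFunction_t kernel;
    hipModuleLoadData(&module, code.data());
    hipModuleGetFunction(&kernel, module, "scale");
    std::printf("kernel (re)loaded\n");

    hipModuleUnload(module);
    return 0;
}
```

Whether it's fast enough for interactive hot-reload is another question, but the API surface is there.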
3. pixele+u4[view] [source] 2023-03-05 09:15:34
>>ribs+(OP)
If you want to run compute on AMD GPU hardware on Linux, it does work. However, it's not as portable as CUDA: you practically have to compile your code for every AMD GPU architecture, whereas with CUDA the nvidia drivers give you an abstraction layer (ish, it's really PTX which provides it, but...) that is forwards and backwards compatible, which makes it trivial to support new cards / generations of cards without recompiling anything.
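To make the difference concrete, here's a hedged sketch of the two build invocations (the flags are real; the target lists and file names are just examples). With hipcc you enumerate every concrete gfx ISA you want to support, while nvcc can embed PTX that the driver JIT-compiles for GPUs that didn't exist at build time:

```shell
# ROCm: the fat binary must list each GPU ISA explicitly. A card whose
# gfx ID is not in this list has no usable code object.
hipcc --offload-arch=gfx908 --offload-arch=gfx90a --offload-arch=gfx1030 \
      -o saxpy saxpy.hip

# CUDA: embed SASS for one real arch (sm_70) plus PTX (compute_70);
# the driver JIT-compiles the PTX for newer GPUs, so no rebuild is needed.
nvcc -gencode arch=compute_70,code=sm_70 \
     -gencode arch=compute_70,code=compute_70 \
     -o saxpy saxpy.cu
```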
4. doikor+T6[view] [source] 2023-03-05 09:54:16
>>ribs+(OP)
With supercomputers you write your code for that specific supercomputer. In such an environment ROCm works ok. Trying to make a piece of ROCm code work across different cards/setups is a real pain (and not that easy with CUDA either if you want good performance).
5. pjmlp+Z7[view] [source] 2023-03-05 10:11:33
>>ribs+(OP)
Some random HPC lab with enough weight to have an AMD team drop by isn't the same thing as the average Joe and Jane developer.