zlacker
[return to "AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source"]
1. codedo+pF1
2024-02-12 22:09:57
>>mfigui+(OP)
As I understand it, Vulkan allows running custom code on the GPU via compute shaders, including code to multiply matrices. Can one simply use Vulkan and ignore CUDA, PyTorch, and ROCm?
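To make the question concrete, here is a rough CPU-side sketch of the programming model a Vulkan compute shader uses for matrix multiplication: each shader invocation (identified in GLSL by gl_GlobalInvocationID) computes one output element, and a dispatch launches a grid of invocations. The names `shader_invocation` and `dispatch_matmul` are illustrative only, emulating the per-invocation logic in plain Python; they are not a real Vulkan API.

```python
def shader_invocation(a, b, c, n, k, gid_x, gid_y):
    """One 'thread': compute C[gid_y][gid_x] as dot(row of A, column of B).
    In a real GLSL compute shader, (gid_x, gid_y) would come from
    gl_GlobalInvocationID and the buffers would be storage buffers."""
    acc = 0.0
    for i in range(k):
        acc += a[gid_y * k + i] * b[i * n + gid_x]
    c[gid_y * n + gid_x] = acc

def dispatch_matmul(a, b, m, n, k):
    """Emulate dispatching an m x n grid of invocations (cf. vkCmdDispatch)."""
    c = [0.0] * (m * n)
    for gid_y in range(m):
        for gid_x in range(n):
            shader_invocation(a, b, c, n, k, gid_x, gid_y)
    return c

# A is m x k, B is k x n, stored row-major as flat lists.
A = [1.0, 2.0,
     3.0, 4.0]   # 2x2
B = [5.0, 6.0,
     7.0, 8.0]   # 2x2
print(dispatch_matmul(A, B, 2, 2, 2))  # [19.0, 22.0, 43.0, 50.0]
```

The real work in Vulkan is the boilerplate around this kernel: creating a device, allocating buffers, compiling the shader to SPIR-V, and recording the dispatch, which is exactly the plumbing that CUDA, PyTorch, and ROCm hide from you.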
2. 0xDEAD+Rw2
2024-02-13 04:47:17
>>codedo+pF1
There's a pretty cool Vulkan LLM engine here, for example:
https://github.com/mlc-ai/mlc-llm