zlacker

1. westur (OP) 2023-12-10 02:50:52
From "PyTorch for WebGPU" (2023) >>36009478 :

> Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
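For context, a minimal sketch of what using that tensor API looks like, written against the mid-2023 ggml.h linked above (exact function names and signatures have shifted between ggml versions, so treat this as an illustration rather than the current API):

    // build_add.c -- tiny ggml example: c = a + b on the CPU path
    // (sketch against the ~mid-2023 ggml.h; later versions moved graph
    //  construction to ggml_new_graph / ggml_build_forward_expand)
    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        // ggml allocates all tensors out of one preallocated arena
        struct ggml_init_params params = {
            .mem_size   = 16 * 1024 * 1024,
            .mem_buffer = NULL,
            .no_alloc   = false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // two 1-D f32 tensors; this is the same ggml_tensor llama.cpp uses
        struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        ggml_set_f32(a, 2.0f);
        ggml_set_f32(b, 3.0f);

        // ops are recorded lazily into a compute graph, then executed
        struct ggml_tensor * c  = ggml_add(ctx, a, b);
        struct ggml_cgraph   gf = ggml_build_forward(c);
        ggml_graph_compute_with_ctx(ctx, &gf, /*n_threads=*/1);

        printf("c[0] = %f\n", ggml_get_f32_1d(c, 0));  // 5.0
        ggml_free(ctx);
        return 0;
    }

The CUDA and OpenCL backends mentioned above plug in at the graph-compute stage rather than at the tensor struct itself, which is presumably where a WebGPU shim for a WASM build would have to slot in as well.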
