
Implementing a GPU's programming model on a CPU
1. raphli+ED1 2023-10-14 14:30:07
>>luu+(OP)
In addition to ISPC, some of this is also done in software fallback implementations of GPU APIs. In the open source world we have SwiftShader and Lavapipe, and on Windows we have WARP[1].
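
As a minimal sketch of how those fallbacks surface to applications (assuming Vulkan headers and a loader are installed; the clamp to 16 devices is just to keep the example short): enumerate the physical devices and check for VK_PHYSICAL_DEVICE_TYPE_CPU, which is how software implementations such as Lavapipe typically identify themselves.

    /* Minimal sketch: list Vulkan physical devices and flag CPU-type
     * (software) implementations such as Lavapipe or SwiftShader. */
    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .apiVersion = VK_API_VERSION_1_1,
        };
        VkInstanceCreateInfo ci = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
        };
        VkInstance instance;
        if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "no Vulkan implementation available\n");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, NULL);
        VkPhysicalDevice devs[16];          /* clamp keeps the sketch short */
        if (count > 16) count = 16;
        vkEnumeratePhysicalDevices(instance, &count, devs);

        for (uint32_t i = 0; i < count; i++) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(devs[i], &props);
            /* Software implementations report VK_PHYSICAL_DEVICE_TYPE_CPU. */
            printf("%s [%s]\n", props.deviceName,
                   props.deviceType == VK_PHYSICAL_DEVICE_TYPE_CPU
                       ? "software/CPU" : "hardware");
        }
        vkDestroyInstance(instance, NULL);
        return 0;
    }

Built with something like cc list_devices.c -lvulkan, this prints Lavapipe alongside any hardware drivers when it's installed, so an app can deliberately pick the software path.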

It's sad to me that Larrabee didn't catch on, as it might have been a path to a good parallel computer: one with efficient parallel throughput like a GPU, but agility closer to a CPU's, so you don't need to batch work into huge dispatches and then wait RPC-like latencies for them to complete. Apparently the main thing that sank it was power consumption.
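
For a sense of the execution model that made Larrabee (and ISPC) interesting, here's a rough illustrative sketch in plain C, not any real API: the same kernel body runs across a gang of lanes under a per-lane execution mask, and it's invoked as an ordinary function call rather than a queued dispatch. The 8-lane width and the function name are made up for the example; Larrabee's vector units were actually 16 lanes wide.

    #define LANES 8   /* illustrative gang width, not Larrabee's real 16 */

    /* Hypothetical "kernel": the same body runs for every lane, like one
     * GPU thread per lane, but invoked as a plain function call with no
     * command buffer, queue submission, or fence to wait on. */
    static void saxpy_gang(float a, const float *x, float *y, int n) {
        for (int base = 0; base < n; base += LANES) {
            int mask[LANES];                /* per-lane execution mask */
            for (int l = 0; l < LANES; l++)
                mask[l] = (base + l) < n;   /* mask off lanes past the end */
            for (int l = 0; l < LANES; l++)
                if (mask[l])                /* ISPC-style compilers map this */
                    y[base + l] += a * x[base + l];  /* to masked SIMD ops */
        }
    }

The contrast with a GPU API is the invocation cost: there's nothing to record or wait on, so small, latency-sensitive chunks of work stay cheap.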

[1]: https://learn.microsoft.com/en-us/windows/win32/direct3darti...

2. zozbot+Ij2 2023-10-14 19:14:34
>>raphli+ED1
The recent "AI chip" proposals (Tenstorrent, Esperanto Technologies, etc.) seem quite similar to the old Larrabee design, except based on RISC-V rather than x86. So we might see that happen after all.
3. raphli+fm2 2023-10-14 19:28:50
>>zozbot+Ij2
Yes, I've got my eye on those and am hopeful. Do you know of any meaty technical description of their programming model? All I've been able to find so far is fairly high-level marketing material. At least for Tenstorrent, Jim Keller has promised that the software stack will be open-sourced, something I'm looking forward to.