My only guess is they have a parallel skunkworks working on the same thing, but in a way that lets them keep it closed-source - that this was a hedge they think they no longer need, and they are missing the forest for the trees on the benefits of cross-pollination and an open-source ethos to their business.
I'm curious about this. Sure, some CUDA code has already been written. But if something new comes along that provides better performance per dollar spent, why continue writing CUDA for new projects? I don't think the "this is what we know how to write" argument works in this case. These aren't scripts you want someone to knock out quickly.
They won’t be able to do that; their hardware isn’t fast enough.
Nvidia is beating them on hardware performance, AND ALSO has an exclusive SDK (CUDA) that is used by almost all deep learning projects. If AMD can get their cards to run CUDA via ROCm, then they can begin to compete with Nvidia on price (though not performance). Then, and only then, if they can start actually producing cards with equivalent performance (also a big stretch), they can try for an Embrace Extend Extinguish play against CUDA.
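For context on what "run CUDA via ROCm" means in practice: ROCm's HIP layer mirrors the CUDA runtime API almost one-for-one (cudaMalloc becomes hipMalloc, kernel syntax is the same), so most CUDA code ports nearly mechanically. A rough sketch of what a HIP kernel looks like, as I understand the API today, so treat the details as approximate:

    #include <hip/hip_runtime.h>

    // SAXPY kernel: the body is identical to the CUDA version;
    // only the host-side runtime calls change prefix (cuda* -> hip*).
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        hipMalloc((void**)&x, n * sizeof(float));  // one-for-one with cudaMalloc
        hipMalloc((void**)&y, n * sizeof(float));
        // ... copy input data into x and y ...
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // same launch syntax
        hipDeviceSynchronize();
        hipFree(x);
        hipFree(y);
    }

The same source compiles with hipcc for AMD GPUs, and HIP also has a CUDA backend, which is what would make a price-based play plausible without asking anyone to rewrite their kernels.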
Well, then I guess CUDA is not really the problem, so being able to run CUDA on AMD hardware wouldn't solve anything.
> try for an Embrace Extend Extinguish play against CUDA
They wouldn't need to go that route. They just need a way to run existing CUDA code on AMD hardware. Once that happens, their customers have the option to save money by targeting ROCm, or whatever AMD is working on at that time.
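Worth noting that tooling for that path already exists: ROCm ships hipify, which mechanically rewrites CUDA source into HIP. The filenames here are made up, but the invocation is roughly:

    hipify-perl kernel.cu > kernel.hip.cpp   # translate cuda* calls to hip*
    hipcc kernel.hip.cpp -o kernel           # compile for an AMD GPU

It's not a magic "run unmodified CUDA binaries" story, but it lowers the switching cost enough that the price argument can start to matter.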