My only guess is that they have a parallel skunkworks working on the same thing in a way they can keep closed-source - that this was a hedge they think they no longer need - and that they're missing the forest for the trees on what cross-pollination and an open-source ethos would do for their business.
Meanwhile, CUDA supports anything with Nvidia stamped on it before it's even released. They'll even go as far as backporting support for new GPUs/compute families to older CUDA versions (see Hopper/Ada support in CUDA 11.8).
You can go out and buy any Nvidia GPU the day of release, take it home, plug it in, and everything just works. This is what people expect.
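To illustrate the "just works" point, here's a minimal device-query sketch using the standard CUDA runtime API (my own example, not from the original comment): any card the installed driver and toolkit recognize shows up with its compute capability, no extra setup needed.

    // Minimal sketch: enumerate whatever Nvidia GPU is plugged in and report
    // its compute capability via the CUDA runtime API. Build with: nvcc query.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("No CUDA-capable device found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // e.g. Hopper reports 9.0 and Ada reports 8.9, both already
            // recognized by CUDA 11.8 as mentioned above
            std::printf("GPU %d: %s (compute capability %d.%d)\n",
                        i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }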
AMD seems to have no clue that this level of usability is what it will take to actually compete with Nvidia, and it's a real shame - their hardware is great.
It's annoying as hell to you and me that they are not catering to the market of people who want to run stuff on their gaming cards.
But it's not clear it's a bad strategy to focus on executing at the high end first. They have been very successful landing MI300s in the HPC space...
Edit: I just looked it up: 25% of the GPU compute in the current Top500 supercomputers is AMD.
https://www.top500.org/statistics/list/
That's even though the list has plenty of V100s and A100s, which came out (much) earlier. I don't have the data at hand, but I wouldn't be surprised if AMD got more of the new Top500 installations than Nvidia in the last two years.
H100s are hard to get. Nearly impossible. CoreWeave and others have scooped them all up for the foreseeable future. So if you're looking at price as the only factor, it becomes somewhat irrelevant when you can't even buy them [0]. Because of that, I don't really understand the focus on price.
Even if you do manage to score yourself some H100s, you also need to factor in the networking between nodes. IB (InfiniBand) is made by Mellanox, which is owned by Nvidia. Lead times on that equipment are 50+ weeks. Again, price becomes irrelevant if you can't even network your boxes together.
As someone building a business around MI300x (and future products), I don't care that much about price [!]. We know going in that this is a super capital intensive business and have secured the backing to support that. It is one of those things where "if you have to ask, you can't afford it."
We buy cards by the chassis; it's one price. I actually don't know the exact prices of the cards (but I can infer them). A lot of it is about who you know and what you're doing. You buy more chassis, you get better pricing. Azure is probably paying half of what I'm paying [1]. But I'd also say that from what I've seen so far, their chassis aren't nearly as nice as mine. I have dual 9754s, 2x bonded 400G, 3TB of RAM, and 122TB of NVMe... plus the 8x MI300X. These are top of the top. They have Intel and I don't know what else inside.
[!] Before you harp on me, of course I care about price... but at the end of the day, I'm less focused on it today than on investing all the capex/opex I can get my hands on into building a sustainable business that provides as much value as possible to our customers.
[0] https://www.tomshardware.com/news/tsmc-shortage-of-nvidias-a...
[1] https://www.techradar.com/pro/instincts-are-massively-cheape...