I never had the tenacity to consider my build "finished," and definitely didn't have your budget, but I built a 5-player room[1] for DotA 2 back in 2013.
I wasn't so lucky with hardware selection and ended up fighting various bugs over the years... diagnosing a broken video card was an exercise in frustration because the virtualization layer made BSODs impossible to see.
I went with local disk-per-VM because latency matters more than throughput, and I'd been doing iSCSI boot for such a long time that I was intimately familiar with the downsides.
I love your setup (thanks for taking the time to share this BTW) and would love to know if you ever get the local CoW working.
My only tech-related comment: I can also confirm that those 10G cards are indeed trash, and I'd humbly suggest an Intel-based eBay special. You could still load iPXE (I assume you're using it) from the onboard NIC and keep it for WoL, but shift the netboot over to the add-in card via a script (rough sketch below), and probably get better stability and performance.
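Something like this, as an untested sketch -- the interface numbering (net0 = onboard, net1 = add-in) is an assumption on my part; run ifstat to see how iPXE enumerates your NICs:

    #!ipxe
    # The onboard NIC (assumed net0) got us this far; hand the actual
    # netboot off to the Intel add-in card (assumed net1).
    ifclose net0
    ifopen net1
    dhcp net1 || goto fallback
    # Boot whatever iSCSI target DHCP advertises in root-path
    sanboot ${net1/root-path} || goto fallback
    :fallback
    # If the add-in card doesn't come up, fall through to normal boot order
    autoboot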
Yeah, I'm pretty sure my onboard 10G Marvell AQtion ethernet is the source of most of my stability woes. About half the time one of these machines boots up, Windows bluescreens within the first couple of minutes, and I think it has something to do with the iSCSI service crashing. Never had trouble in the old house where the machines had a 1G network -- but load times were painful.
Luckily if the machines don't crash in the first couple minutes, then they settle down and work fine...
Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...
Bulk buying is probably hard, but ex-enterprise Intel 10G on eBay tends to be pretty inexpensive. Dual SFP+ X520 cards are regularly available for $10. Dual 10GBase-T X540 cards run a bit more, with more variance, $15-$25. No 2.5/5Gb support, but my 10G network equipment can't do those speeds either, so no big deal. These are almost all x8 cards, so you need a slot that can accommodate them, but x4 electrical should be fine. (I've seen reports of some enterprise gear misbehaving in x1/x4 slots for reasons beyond the bandwidth restriction, which by itself shouldn't be a problem; if a dual-port card wants x8 but you only have x4 and only use a single port, you should be fine.)
I think all of mine can PXE boot, but sometimes you have to fiddle with the EEPROM tools (sketch below), and they might be legacy-only, no UEFI PXE, which is fine for me.
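For the EEPROM fiddling, Intel's BootUtil is the usual tool. A from-memory sketch -- the exact flag spellings are an assumption, so check the docs bundled with the utility:

    # Enumerate ports and show what boot ROM (if any) each one carries
    ./bootutil64e -E
    # Enable the option ROM flash on adapter 1, then select legacy PXE
    ./bootutil64e -NIC=1 -FLASHENABLE
    ./bootutil64e -NIC=1 -BOOTENABLE=pxe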
And you usually have to be OK with running them with no bracket, since they usually come with low-profile brackets only.
X520s with full-height brackets do exist (I have a box full of them), but you may pay $3-5/ea more than the more common low-profile ones. If you're willing to pop the bracket off, you can also find full-height brackets standalone and install your own.
Also, in general: in my experience, avoiding 10GbE RJ45 is very worthwhile. More expensive, more power consumption, more heat generation. If you can stick an SFP+ card in something, do it. IMO 10GbE RJ45 is only worthwhile when you've got a device that supports it but can't easily take a PCIe NIC, like some Intel NUCs.
I think my muni fiber install happening this week might have a 10GBase-T handoff, and I've got a port for that open on my switch in the garage. If that works out it'll be neat, but I'll need to upgrade some more stuff to make full use of it.