If a decentralised system is to stay decentralised, it needs to account for spammy bad actors.
Also, researchers of decentralized systems mostly ignored the existence of malicious actors until the very early 2000s; since then, everyone has become well aware of them and started considering how to deal with them.
But in practice, datacenters, uplinks, and internet exchanges are often able to apply flowspec and firewall rules, or block all UDP for a subnet, across every network they have relationships with. So plenty of those nodes can sit behind ISPs that mitigate volumetric attacks automatically, and even simple DNS failover might be good enough to protect against such attacks. That part isn't hard. Layer 7 is where the hard part is.
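As a rough sketch of what "simple DNS failover" could mean here: a health-checker that probes each node and reports which ones should remain in the DNS answer. Everything in it is hypothetical (the node IPs, port, thresholds, and the fact that you'd push the result to whatever API your DNS provider actually exposes, ideally with a short TTL).

```python
import socket
import time

# Hypothetical node pool; a real deployment would load this from config.
NODES = ["203.0.113.10", "203.0.113.20", "198.51.100.5"]
PORT = 443             # service port to probe
TIMEOUT = 2.0          # seconds before a probe counts as a failure
FAILS_BEFORE_PULL = 3  # consecutive failures before pulling a node from DNS

def probe(ip: str) -> bool:
    """Return True if a TCP connection to the node succeeds."""
    try:
        with socket.create_connection((ip, PORT), timeout=TIMEOUT):
            return True
    except OSError:
        return False

def main() -> None:
    failures = {ip: 0 for ip in NODES}
    while True:
        for ip in NODES:
            failures[ip] = 0 if probe(ip) else failures[ip] + 1
        healthy = [ip for ip in NODES if failures[ip] < FAILS_BEFORE_PULL]
        # Here you'd update the A/AAAA records via your DNS provider's API;
        # never empty the pool entirely, or you take yourself offline.
        print("A records should be:", healthy or NODES)
        time.sleep(10)

if __name__ == "__main__":
    main()
```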
Our setup:
1. ~25 servers per datacenter
2. 5 datacenters (1 in Texas, 1 in Utah, 2 in California, 1 in Chicago)
3. 1 server in each location connected at 10 Gbit, the rest at 1 Gbit
I got to watch firsthand as DNS reflection attacks crippled our infrastructure one server at a time. Only 2 of the datacenters (1 in LA, 1 in Chicago) had the infrastructure to mitigate the DDoS without significantly affecting their operations. Even post-mitigation, the 2 datacenters that didn't end up blackholing our IPs at the edge still let so much malicious traffic through that only the 2 10 Gbit servers remained online, and they were nearly CPU-bound across 24 cores just handling the SENDQ/RECVQ for the NIC.
I mention this because it's easy to dismiss the threat until you're in the situation, and the realities of what you actually control are vastly different from what is technically feasible. The size and scope of modern DDoS attacks can easily overwhelm a datacenter's entire uplink, even after pushing mitigations upstream. Reverse proxies from companies like Cloudflare have become so popular because most operators don't have the raw resources required to mitigate this themselves. Even some larger datacenters don't.