Sure. But without seeing the other side's argument, I have to wonder if their point wasn't that they're not designed to be stable for the purpose of identifying a service/thing on the Internet; things can and do move and change. Hardware failure is a good example of that. Just like a house address: those too are normally stable, but people can and do move. With software, though, it's as if we look our friend up in the white pages¹ before every visit, which one would not do in real life.
¹oh God I'm dating myself here.
1. Why do I state that? Because I kept reading about why DNS was created and always encountered the same parroted explanation, year after year: something along the lines that IP addresses were constantly in flux. That may have been true when DNS was created and the www was young, but was it still true today? I wanted to find out. I did experiments. I found I could use the same DNS data day after day, week after week, month after month, year after year.
Why would I care? Because by eliminating remote DNS lookups I was able to speed up the time it takes me to retrieve data from the www.^2 Instead of assuming that every site is going to switch IP addresses every second, minute, day or week, I assume that only a few will do that and most will not. I want to know about the sites that are changing their IP addresses, and I want to know the reasons. When a site changes its IP address, I am alerted, as you see with today's change to HN's address (a rough sketch of that check follows after these notes). Whereas when people assume every site is frequently changing its IP address, they perform unnecessary DNS lookups for the majority of sites. That wastes time, among other things. And, it seems, people are unaware when sites change addresses.
2. Another benefit for me is that when some remote DNS service goes down (this has happened several times), I can still use the www without interruption. I already have the DNS data I need. Meanwhile the self-proclaimed "experts" go into panic mode.
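Roughly, the alerting mentioned in note 1 amounts to something like the following sketch (Python, not my actual tooling; the store file name and format are hypothetical):

    # Compare stored A records against a fresh lookup and flag any change.
    import json
    import socket

    STORE = "dns_store.json"   # hypothetical local file: {hostname: [addresses]}

    def current_addresses(host):
        # getaddrinfo uses whatever resolver the OS is configured to use
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})

    with open(STORE) as f:
        stored = json.load(f)

    for host, old in stored.items():
        new = current_addresses(host)
        if new != sorted(old):
            print(f"{host}: {old} -> {new}")   # this site moved; find out why

Only the handful of hosts that actually moved produce any output; everything else keeps resolving from the stored data.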
Just run a DNS server locally configured to serve stale records if upstream is unavailable.
As for your first point, the same local DNS server would also provide you with lower/no latency.
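For example, with Unbound (assuming a reasonably recent version; option names as in its documentation) the relevant part of unbound.conf looks roughly like:

    server:
        prefetch: yes              # refresh popular records before they expire
        serve-expired: yes         # answer from expired cache if upstream fails
        serve-expired-ttl: 86400   # how long past expiry stale answers may be used
        cache-min-ttl: 300         # optional floor on TTLs you are willing to honor

That gets you both properties at once: answers from RAM for anything already seen, and continued service when the upstream resolver is unreachable.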
I used to serve DNS data over a localhost authoritative server. Now I store most DNS data in a localhost forward proxy.
If "upstream" means third party DNS service to resolve names piecemeal while accessing the www, I do not do that.^1
1. I do utilise third party DoH providers for bulk DNS data retrieval. Might as well, because DoH allows for HTTP/1.1 pipelining. I get DNS data from a variety of sources, rather than only one. (A rough sketch of this bulk retrieval follows below.)
2. If it were "BS" then that would imply I am trying to mislead or deceive. The reverse is true. I kept reading sources of information about the internet that were meant to have me believe that most DNS RRs are constantly changing. I gathered DNS data. The data suggested those sources, whether intentionally or not, could be misleading and deceptive: most DNS RRs did not change. "BS" could even mean that I am lying. But if I were lying and the DNS RRs for the sites I access were constantly changing, then the system I devised for using stored DNS data would not work. It does work. I have been using it for years.
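The bulk retrieval mentioned in note 1 above can be approximated with the providers' JSON interfaces; a rough Python sketch (this one makes sequential requests rather than pipelining them over HTTP/1.1):

    import json
    import urllib.request

    PROVIDERS = [
        "https://dns.google/resolve?name={}&type=A",
        "https://cloudflare-dns.com/dns-query?name={}&type=A",
    ]
    NAMES = ["news.ycombinator.com", "example.com"]   # the list of sites to refresh

    for name in NAMES:
        for url in PROVIDERS:                          # multiple sources, not just one
            req = urllib.request.Request(
                url.format(name), headers={"Accept": "application/dns-json"})
            with urllib.request.urlopen(req, timeout=5) as resp:
                data = json.load(resp)
            answers = [a["data"] for a in data.get("Answer", []) if a.get("type") == 1]
            print(name, url.split("/")[2], answers)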
I run my own authoritative DNS on my router (though not on localhost; interesting), and have for a long time (since I started traffic shaping to push the ACKs to the front). Like you, I've also enjoyed having superior performance over those using public servers. Everyone says "but you can use 8.8.8.8 or 1.1.1.1! they're fast!" and I (we?) smile and nod.
Just did a quick little test for this comment. Resolving with 8.8.8.8 is fast! And... also between 800% and 2500% slower than using my (and your) setup. high five
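For anyone who wants to reproduce that kind of quick test, a rough sketch (assumes the dnspython package, and that a resolver is listening on 127.0.0.1; substitute your router's address, and expect the absolute numbers to vary with cache state and network):

    import time
    import dns.resolver

    def timed_lookup(server, name):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        t0 = time.perf_counter()
        r.resolve(name, "A")
        return (time.perf_counter() - t0) * 1000   # milliseconds

    for server in ("127.0.0.1", "8.8.8.8"):        # local resolver vs Google
        print(server, round(timed_lookup(server, "news.ycombinator.com"), 2), "ms")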
Also, the haters don't know something that we do, which is that... sometimes 8.8.8.8 doesn't work!!!
A few weeks ago there was a website I couldn't access from a computer using 8.8.8.8. I thought, "that's odd", used dig, and it didn't resolve. From the same network I tried a different resolver -- worked. Tried 8.8.8.8 again -- fail. sshed a few hundred miles away to check 8.8.8.8 again -- working. tcpdump on the router, watched 8.8.8.8 fail to resolve in front of my eyes. About 4 minutes later, back to normal. "yes, sometimes the internet so-called gods fail."
I'm quite curious why you changed from a full authoritative setup to a proxying one. I've skimmed a handful of your past posts and agreed entirely, so we're both "right", or both wrong/broken-brained in the same way. ;-)
Is there something I could be doing to improve my already fantastic setup?
1. This goes back to 2008 and "DNS cache poisoning". Easiest way to avoid it was to not use shared caches.
2. I created a fat stub resolver^3 that stored all the addresses for the TLD nameservers, i.e., what is in root.zone,^4 inside the binary. This reduces the number of queries for any lookup by one. I then used this program to resolve names without recursion, i.e., using only authoritative servers with the RD bit unset. Then I discovered patterns in the different permutations of lookups needed to resolve names, i.e., common DNS (mis)configurations. I found I could "brute force" lookups by trying the fastest or most common permutations first. I could beat the speed of 8.8.8.8 or a local cache for names not already in the cache. (A rough sketch follows after these notes.)
3. Fat for the time. It is tiny compared to today's Go and Rust binaries.
4. Changes to root.zone were rare. They are probably more common today, what with all the new gTLDs, but will always be relatively infrequent. A classic example of DNS data that is more static than dynamic.
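To make note 2 concrete, here is a rough sketch of one such lookup (Python with dnspython, not the stub resolver itself; the TLD server address is the one published in root.zone at the time of writing, so verify it against your own copy):

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # One .com server from root.zone; the stub described in note 2 carries all of them.
    COM_TLD = "192.5.6.30"   # a.gtld-servers.net

    def ask(server, name):
        q = dns.message.make_query(name, dns.rdatatype.A)
        q.flags &= ~dns.flags.RD          # recursion desired bit unset
        return dns.query.udp(q, server, timeout=2)

    # Step 1: the TLD server returns a referral with glue for the domain's nameservers
    referral = ask(COM_TLD, "google.com")
    glue = [rr.address for rrset in referral.additional
            for rr in rrset if rr.rdtype == dns.rdatatype.A]

    # Step 2: ask one of those nameservers directly for the answer
    print(ask(glue[0], "google.com").answer)

Two queries, no root query, no recursion. (If the referral carries no in-zone glue, a further lookup of the nameserver names is needed; that is one of the permutations mentioned above.)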