zlacker

1. loxias+(OP) 2022-07-09 05:32:28
> I used to serve DNS data over a localhost authoritative server. Now I store most DNS data in a localhost forward proxy.

I run my own authoritative DNS on my router (though not on localhost; interesting), and have for a long time, ever since I started traffic shaping to push ACKs to the front of the queue. Like you, I've enjoyed superior performance over those using public servers. Everyone says "but you can use 8.8.8.8 or 1.1.1.1! They're fast!" and I (we?) smile and nod.

Just did a quick little test for this comment. Resolving with 8.8.8.8 is fast! And... also between 800% and 2500% slower than using my (and your) setup. High five.
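For anyone who wants to reproduce that kind of comparison, here's a rough sketch of such a timing test (my own illustration, not the exact commands used above; the local resolver address 192.168.1.1 is a placeholder for whatever your router answers on):

```python
import socket
import struct
import time

def build_query(name, qid=0x1234, rd=1):
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    flags = 0x0100 if rd else 0x0000  # RD bit set when asking a recursive resolver
    header = struct.pack(">HHHHHH", qid, flags, 1, 0, 0, 0)  # QDCOUNT=1
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def time_lookup(resolver_ip, name, timeout=2.0):
    """Return the round-trip time in ms for one UDP query, or None on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        pkt = build_query(name)
        start = time.monotonic()
        s.sendto(pkt, (resolver_ip, 53))
        try:
            s.recv(512)
        except socket.timeout:
            return None
        return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    # Compare a public resolver against a local one (addresses are examples).
    for resolver in ("8.8.8.8", "192.168.1.1"):
        try:
            print(resolver, time_lookup(resolver, "example.com"))
        except OSError:
            print(resolver, "unreachable")
```

Run it a few times so the local cache is warm; the gap is most dramatic on names your resolver already holds.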

Also, the haters don't know something that we do: sometimes 8.8.8.8 doesn't work!

A few weeks ago there was a website I couldn't access from a computer using 8.8.8.8. I thought "that's odd", used dig, and the name didn't resolve. From the same network I tried a different resolver: worked. Tried 8.8.8.8 again: fail. SSHed to a machine a few hundred miles away to check 8.8.8.8 from there: working. Ran tcpdump on the router and watched 8.8.8.8 fail to resolve right in front of my eyes. About 4 minutes later, back to normal. Yes, sometimes the so-called internet gods fail.

I'm quite curious why you changed from a full authoritative setup to a proxying one. I've skimmed a handful of your past posts and agreed entirely, so we're either both "right", or both wrong/broken-brained in the same way. ;-)

Is there something I could be doing to improve my already fantastic setup?

replies(1): >>1vuio0+ja
2. 1vuio0+ja 2022-07-09 07:24:33
>>loxias+(OP)
Using a forward proxy and mapped addresses instead of doing DNS lookups is just a phase in a long series of steps to eliminate the use of third-party DNS service, i.e., shared caches,^1 then eliminate unnecessary DNS queries,^2 and finally eliminate the use of DNS altogether. However, there are other reasons I use the proxy, namely control over TLS and HTTP.
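The mapped-address idea can be reduced to something like this (a sketch of the concept only, not 1vuio0's actual proxy; the hostnames and addresses are placeholders):

```python
# A static host-to-address map the proxy consults instead of doing a DNS lookup.
# Entries here are illustrative placeholders, not real mappings.
ADDRESS_MAP = {
    "example.com":     "93.184.216.34",
    "www.example.net": "203.0.113.7",
}

def resolve(host: str) -> str:
    """Return the mapped address, or fail hard.

    The point is that no DNS query ever leaves the machine; unknown
    hosts are added to the map deliberately, not looked up on the fly.
    """
    addr = ADDRESS_MAP.get(host.lower().rstrip("."))
    if addr is None:
        raise KeyError(f"no mapping for {host!r}; add one deliberately")
    return addr
```

A real forward proxy would consult such a map when opening the upstream TCP connection, which is also where it gets its control over TLS and HTTP.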

1. This goes back to 2008 and "DNS cache poisoning". The easiest way to avoid it was to not use shared caches.

2. I created a fat stub resolver^3 that stored all the addresses for the TLD nameservers, i.e., what is in root.zone,^4 inside the binary. This reduces the number of queries for any lookup by one. I then used this program to resolve names without recursion, i.e., querying only authoritative servers with the RD bit unset. Then I discovered patterns in the different permutations of lookups needed to resolve names, i.e., common DNS (mis)configurations. I found I could "brute force" lookups by trying the fastest or most common permutations first. I could beat the speed of 8.8.8.8 or a local cache for names not already in the cache.

3. Fat for the time. It is tiny compared to today's Go and Rust binaries.

4. Changes to root.zone were rare. They are probably more common today, what with all the new gTLDs, but will always be relatively infrequent. A classic example of DNS data that is more static than dynamic.
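The recursion-free queries described in footnote 2 can be sketched as follows (my illustration, not the fat stub itself). The only difference on the wire from an ordinary stub query is that the RD bit in the header is cleared; 198.41.0.4 is a.root-servers.net, the kind of address such a resolver would ship inside the binary:

```python
import socket
import struct

def build_iterative_query(name, qid=0x2a2a):
    """DNS A query with the RD (recursion desired) bit unset (RFC 1035)."""
    flags = 0x0000  # QR=0, opcode=QUERY, RD=0: ask authoritative servers only
    header = struct.pack(">HHHHHH", qid, flags, 1, 0, 0, 0)  # QDCOUNT=1
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

if __name__ == "__main__":
    # An authoritative server won't recurse; it answers with a referral
    # to the next zone's nameservers, which the stub then queries itself.
    pkt = build_iterative_query("example.com")
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(pkt, ("198.41.0.4", 53))  # a.root-servers.net
            reply = s.recv(4096)
            print("rcode:", reply[3] & 0x0F)
    except OSError:
        print("no network")
```

Shipping the TLD server addresses in the binary skips the root referral entirely, which is the "one fewer query per lookup" in footnote 2.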
