You can use that to:
- test weird DNS setups
- issue proper TLS certificates (technically possible, but it's a lesser-known fact, and some services like Let's Encrypt forbid it as a matter of policy)
- serve multiple services from a single IP and the same port (just the usual host/server configuration on a typical reverse proxy, optionally with SNI so it also works with TLS on top; rough sketch below)
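Something like this, as a minimal Go sketch of the SNI + Host-based routing idea (the app1.internal/app2.internal names, backend ports and cert file paths are all made up for illustration, and real proxies handle Host:port, timeouts, etc.):

    // Minimal sketch: one IP/port serving several hostnames, picking the
    // certificate via SNI and the backend via the Host header.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func mustParse(s string) *url.URL {
        u, err := url.Parse(s)
        if err != nil {
            panic(err)
        }
        return u
    }

    func main() {
        // Hypothetical upstreams keyed by the Host header the client asked for.
        backends := map[string]*url.URL{
            "app1.internal": mustParse("http://127.0.0.1:8081"),
            "app2.internal": mustParse("http://127.0.0.1:8082"),
        }

        // Hypothetical per-hostname certificates, selected by SNI below.
        certs := map[string]tls.Certificate{}
        for _, name := range []string{"app1.internal", "app2.internal"} {
            c, err := tls.LoadX509KeyPair(name+".crt", name+".key")
            if err != nil {
                log.Fatal(err)
            }
            certs[name] = c
        }

        tlsCfg := &tls.Config{
            // Pick the certificate matching the SNI the client sent.
            GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
                if c, ok := certs[hello.ServerName]; ok {
                    return &c, nil
                }
                return nil, nil // nothing matches: the handshake will fail
            },
        }

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if target, ok := backends[r.Host]; ok {
                httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
                return
            }
            http.NotFound(w, r)
        })

        srv := &http.Server{Addr: ":443", Handler: handler, TLSConfig: tlsCfg}
        // Cert/key come from GetCertificate, so these arguments stay empty.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }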
ICANN has said they will never delegate .internal and it should be used for these kinds of private uses.
I'm a coauthor on this Internet draft so I'm ofc rather biased.
Because traditionally there was a (forwarding) DNS server somewhere on the local network doing the caching for everybody.
Nowadays most decent Linux distributions ship a very good caching DNS resolver (systemd-resolved), so that's not an issue anymore.
To resolve names, you can ask /etc/hosts for the name-to-IP conversion; you can also ask DNS, or LDAP, or NIS; there are probably more I've forgotten about. Which sources get consulted, and in what order, is configured per platform (example below the links):
solaris: https://docs.oracle.com/cd/E19683-01/806-4077/6jd6blbbe/inde...
glibc: https://man7.org/linux/man-pages/man5/nsswitch.conf.5.html
musl appears not to have an nsswitch.conf or any other way to configure name-to-number resolution behavior?
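For reference, the glibc/Solaris behavior those links describe is driven by the hosts line in nsswitch.conf; a typical line looks something like this, though defaults vary by distribution:

    hosts: files dns
    # on a systemd-resolved system it's often more like:
    # hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns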
The fact that this model is still largely assumed is due to inertia.
I'm not sure I like the public internet with IP certs. I do it at home, because sometimes DNS is down and it has some good internal uses. But it shouldn't be public. Imagine firing up a /24 on Linode, requesting certs for every IP, then releasing the IPs and saving the certs. Another Linode account would later get an IP in that range, and then you could freely MITM their site by IP. I'm assuming a number of 'magical' things in between, of course, but it seems like allowing a cert for a bare IP from a public CA could be a terrible thing. The only saving grace here is the short lifetime of the certs, although I'm not a fan of that either.
As an aside, I'm starting to get squinty-eyed about LE; both things they announce in that article greatly affect the internet at large. I see it as something Google would pull to ensure dominance by lock-in: sorry, you can no longer change SSL providers because certs only live a few minutes now, and of course you can't afford to not have a cert or no one will see your site. I'm exaggerating slightly, but these changes are not something I think should be allowed, and LE should've listened to everyone yelling. Sure, allow down to 6-day certs, but that will surely become the maximum soon.
Having a daemon would add complexity, take up RAM and CPU, and be unnecessary in general. There really weren't that many daemons running in the olden times.
DNS resolution is expected to be fast, since it's (supposed to be) UDP-based. It's also expected that there is a caching DNS resolver somewhere near the client machines to reduce latency and spread load (in the old days the ISP would provide one, and later, as "home routers" became a thing, the "home router" provided one too).
Finally, as networks were fairly slow, you just didn't make a ton of network connections, so you shouldn't have been doing a ton of DNS lookups either. But even if you did, the DNS lookup was way faster than your TCP connection, and the application could cache results easily (I believe Windows did cache them in its local resolver, and nscd did on Linux/Unix); a rough sketch of that kind of in-process cache is below.
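Something like this is all an application needs, sketched in Go with an assumed fixed TTL (the stub resolver API doesn't expose the real record TTLs):

    // Sketch of an in-process lookup cache around the system resolver.
    package main

    import (
        "fmt"
        "net"
        "sync"
        "time"
    )

    type cacheEntry struct {
        addrs   []string
        expires time.Time
    }

    type hostCache struct {
        mu  sync.Mutex
        ttl time.Duration
        m   map[string]cacheEntry
    }

    func newHostCache(ttl time.Duration) *hostCache {
        return &hostCache{ttl: ttl, m: make(map[string]cacheEntry)}
    }

    // Lookup returns cached addresses while fresh, otherwise asks the resolver.
    func (c *hostCache) Lookup(name string) ([]string, error) {
        c.mu.Lock()
        e, ok := c.m[name]
        c.mu.Unlock()
        if ok && time.Now().Before(e.expires) {
            return e.addrs, nil
        }
        addrs, err := net.LookupHost(name)
        if err != nil {
            return nil, err
        }
        c.mu.Lock()
        c.m[name] = cacheEntry{addrs: addrs, expires: time.Now().Add(c.ttl)}
        c.mu.Unlock()
        return addrs, nil
    }

    func main() {
        c := newHostCache(30 * time.Second)
        for i := 0; i < 2; i++ {
            addrs, err := c.Lookup("example.com") // second call comes from the cache
            fmt.Println(addrs, err)
        }
    }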
If you really did need DNS caching (or anything else DNS-related), you would just run BIND and configure it to your needs. Configuring BIND was one of the black arts of UNIX, so that was avoided whenever possible :)
It's a bit of a stretch to say anyone agreed on not using IP-based certs. Quite the contrary: it's in RFC 5280, and a SAN can contain an IP address. It's just very rare to do that, but it can be done and is done. Modern browsers and OSes accept it as well.
It's nice when you need to do some cert pinning to make sure there's no MITM eavesdropping, or for example in some on-prem environments where you can't fully control the workstations/DNS of your user endpoints but still want your services behind certs that actually validate properly.
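For the "SAN can contain an IP" part, here's a minimal self-signed sketch using Go's crypto/x509 (203.0.113.10 is just a documentation address; a CA-issued cert carries the same kind of iPAddress SAN):

    // Sketch: generate a self-signed cert whose SAN is an IP, not a DNS name.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "203.0.113.10"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(90 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // This is the part that matters: an iPAddress SAN instead of a dNSName.
            IPAddresses: []net.IP{net.ParseIP("203.0.113.10")},
        }

        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }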
That is how I feel about the takeover of the .local domain for mDNS. Why step in and take a widely used, shorter suffix for something that will almost always be automated, instead of picking something longer and leaving our .local setups alone? I will not forgive, I will not forget!
Does reusing it cause any problems for mDNS, or does mDNS usage cause problems for the internal-domain usage?
And then when you reconfigure it, depending on the stack it won't bother querying mDNS at all if a DNS resolver responds.
You don't have enough nodes for CRL size to become a problem, and if a node does get compromised you're hardly going to leave it up and running for a year (i.e. you'd obviously kill the node, and the cert is useless without control of the DNS name).
EDIT: the other direction to go, of course, is way shorter. Like, do you need a certificate with a lifetime longer than business hours before renewal?
Yeah, not sure why that got approved in the first place. Sure, it wasn't officially part of any of the protected/reserved names when it got bought, but Google shouldn't have been allowed to purchase it at all, since it was already in use for non-public stuff. That they also require HSTS, just to break existing setups, is salt in the wound.
How many different programs on the same machine hit so many common external services that per-program caching of names isn't sufficient?
The article lists a bunch of fun with systemd running junk in containers, which seems counterproductive to me. A lot of systemd stuff seems to be things that are useful on a laptop but end up where they're really not wanted.
Local DNS caching seems like a solution looking for a problem to me. I disable it wherever I can. I have local(ish) DNS caches on the network, but not inside LXC containers or Linux hosts.
There are also probably some savings from not having to convert between the structures used by gethostbyname() and raw DNS questions and answers.