I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.
In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.
In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.
This is what I do. You can get Tailscale-like access using things like Pangolin[0].
You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.
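For the Tor route, a minimal sketch of the torrc side (paths and ports are illustrative) is just two lines per hidden service; clients then need the generated .onion address - and, with client authorization enabled, a key - before they can route to the daemon at all:

    # /etc/tor/torrc -- expose SSH only as an onion service
    HiddenServiceDir /var/lib/tor/ssh/
    HiddenServicePort 22 127.0.0.1:22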
> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.
This is what I don't do. Anything that needs real internet access (mail, raw web access, etc.) gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].
[0] https://github.com/fosrl/pangolin
[1] >>46136026
You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.
With wireguard that removes a lot of attack surface, but it wouldn't work behind CGNAT without a relay box.
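If you do go the relay-box route, the usual trick is that the CGNAT'd machine dials out to a small VPS with a public IP and keeps the mapping alive from the inside - roughly this, with placeholder keys and addresses:

    # wg0.conf on the home box behind CGNAT
    [Interface]
    PrivateKey = <home-box-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <relay-public-key>
    # cheap VPS with a public IP acting as the relay
    Endpoint = relay.example.com:51820
    AllowedIPs = 10.8.0.0/24
    # keep the NAT mapping open by sending keepalives outward
    PersistentKeepalive = 25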
Behind a VPN your only attack surface is the VPN which is generally very well secured.
Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through wireguard. NEVER expose this publicly, even if you don't have admin:admin credentials.
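One way to enforce that - as a sketch, assuming the tunnel interface is wg0 and an existing inet filter table with an input chain - is to only accept the service's port from the tunnel:

    # nftables: service port (8123 as an example) reachable only via wg0
    nft add rule inet filter input iifname "wg0" tcp dport 8123 accept
    nft add rule inet filter input tcp dport 8123 drop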
Ideal if you have the resources (time, money, expertise). There are different levels of qualification, convenience, and trust that shape what people can and will deploy. These define where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.
There was a popular post about this less than a month ago: >>46305585
I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.
I personally wouldn't trust a machine if a container was exploited on it; you don't know if there were any successful container escapes, kernel exploits, etc. Even if they escaped with only user permissions, they can fill your box with booby traps if they have container-granted capabilities.
I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.
I now know better, but there are still a million other pitfalls to fall in to if you are not a full time system admin. So I prefer to just put it all behind a VPN and know that it's safe.
But some peers are sometimes on the same LAN (eg phone is sometimes on same LAN as pc). Is there a way to avoid forwarding traffic through the server peer in this case?
Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
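If you do end up deploying Fail2Ban, a minimal jail along these lines (thresholds are just examples) is usually enough:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    # ban for an hour after 5 failures within 10 minutes
    maxretry = 5
    findtime = 600
    bantime  = 3600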
This incident precisely shows that containerization worked as intended and protected the host.
Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.
Pro tip: After you configure a new service, review the output of ss -tulpn. This will tell you what ports are open. You should know exactly what each line represents, especially those that bind on 0.0.0.0 or [::] or other public addresses.
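Illustrative output (ports and process names made up) - the lines to scrutinize are the ones not bound to 127.0.0.1:

    $ ss -tulpn
    Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
    tcp   LISTEN 0      128        0.0.0.0:22          0.0.0.0:*     users:(("sshd",pid=812,fd=3))
    tcp   LISTEN 0      511      127.0.0.1:8080        0.0.0.0:*     users:(("nginx",pid=901,fd=6))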
The pitfall that you mentioned (Docker automatically punching a hole in the firewall for the services that it manages when an interface isn't specified) is discoverable this way.
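The usual fix, if the service only needs to be reached by a local reverse proxy or over the VPN, is to publish the port on loopback explicitly (compose syntax; the service name and ports are made up):

    # docker-compose.yml
    services:
      immich:
        ports:
          # published on 127.0.0.1 only, not 0.0.0.0
          - "127.0.0.1:8080:80"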
https://github.com/jwhited/wgsd
https://www.jordanwhited.com/posts/wireguard-endpoint-discov...
Never again, it takes too much time and is too painful.
Certs from Tailscale are reason enough to switch, in my opinion!
The key with successful self hosting is to make it easy and fast, IMHO.
I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.
Your eventual connection is direct to your device, but all the management before that runs on Tailscale's servers.
But I also think it's worth a mention that for basic "I want to access my home LAN" use cases you don't need P2P, you just need a single public IP to your lan and perhaps dynamic dns.
Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware or your SMTP server from sending spam; it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down correctly, which is exactly the kind of thing people don't want to have to worry about).
Tailscale puts the protection of the public portion of the story in the hands of a company dedicated to keeping that portion secure. Wireguard (or similar) limits the exposure to a single service with low churn and minimal attack surface. It's a very different discussion than preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).
That's where wg/Tailscale come in - it's just a traditional IP network at that point. Also less to do to shut up bad login attempts from spam bots and such. I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.
The other big upside (outside of not having a 3rd party) of putting in the slightly greater effort to do wg/ssh/another personal VPN is that the latency and bandwidth to your home services will be better.
I am sure there must be an iPhone app which could probably allow something like this too. I highly recommend more people take a look into such a workflow; I might look into it more myself.
Tmate is a wonderful service if you have home networks behind NATs.
I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.
Once again it has the third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on Hetzner/UpCloud/OVH and route traffic through that by hosting tmate there. So YMMV.
If you are running some public service, it might have bugs - we see RCE issues regularly - or there can be some misconfiguration, and containers by default don't provide enough security if a hacker tries to break in. Containers aren't secure in that sense.
Virtual machines are the intended tool for that, but they can be full of friction at times.
If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/
It lets you manage VMs as easily as containers, provides a web UI, and gives a level of isolation that you can (usually) trust everything on.
I'd say don't take chances with your home server, because that server sits inside your firewall and can, in the worst case, infect other devices. Virtualization with things like incus or Proxmox (another well-respected tool) is the safest option and provides isolation you can trust. I highly recommend taking a look at it if you deploy public-facing services.
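Once incus is installed, spinning up an isolated VM (rather than a container) is a one-liner - the image and name here are just examples:

    # Debian 12 VM for the public-facing service
    incus launch images:debian/12 public-web --vm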
I prefer to hide my port instead of using F2B for a few reasons.
1. Log spam. Looking in my audit logs for anything suspicious is horrendous when there's just megs of login attempts for days.
2. F2B has banned me in the past due to various oopsies on my part. Which is not good when I'm out of town and really need to get into my server.
3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.
Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.
There are some well-respected compute providers you can use as well, and for a very low amount you can sort of offload this worry to someone else.
That being said, VMs themselves are a good enough security box too. I consider running VMs even on your home server with public-facing services usually allowable.
- Each device? This means setting up many peers on each of your devices
- Router/central server? That's a single point of failure, and often a performance bottleneck if you're on LAN. If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.
Not to mention DDNS can create significant downtime.
Tailscale fails over basically instantly, and is E2EE, unlike the hub setup.
Tailscale really is superior here if you use tailnet lock. Everything always stays encrypted, and fails over to their encrypted relays if direct connection is not possible for various reasons.
> Router/central server? That's a single point of failure
Your router is a SPOF regardless. If your router goes down you can't reach any nodes on your LAN, Tailscale or otherwise. So what is your point?
> If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.
Secure your router. This is HN, not advice for your mom.
> Not to mention DDNS can create significant downtime.
Set your DNS ttl correctly and you should experience no more than a minute of downtime whenever your public IP changes.
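e.g. a 5-minute TTL on the home record, in zone-file terms (name and address are hypothetical):

    ; short TTL so a changed public IP propagates quickly
    home.example.com.  300  IN  A  203.0.113.45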
A lot of people are behind CGNAT or behind a non-configurable router, which is an abomination.
> Secure your router
A typical router cannot be secured against physical access, unlike your servers which can have disk encryption.
> Your router is a SPOF regardless
Tailscale will keep your connection over a downstream switch, for example. It will not go through the router if it doesn't have to. If you use it for other use cases like kdeconnect synchronizing the clipboard between phone and laptop, that will also stay up independent of your home router.
I would argue OpenVPN is easier. I currently run both (there are some networks I can’t use UDP on, and I haven’t bothered figuring out how to get wireguard to work with TCP), and the OpenVPN initial configuration was easier, as is adding clients (DHCP, pre-shared cert+username/password).
This isn’t to say wireguard is hard. But imo OpenVPN is still easier - and it works everywhere out of the box. (The exception is networks that only let you talk on 80 and 443, but you can solve that by hosting OpenVPN on 443, in my experience.)
This is all based on my experience with opnsense as the vpn host (+router/firewall/DNS/DHCP). Maybe it would be a different story if I was trying to run the VPN server on a machine behind my router, but I have no reason to do so - I get at least 500Mbps symmetrical through OpenVPN, and that’s just the fastest network I’ve tested a client on. And even if that is the limit, that’s good enough for me, I don’t need faster throughput on my VPN since I’m almost always going to be latency limited.
There's also no particular reason to think Home Assistant's authentication has to have a weakness.
And your devices are also unlikely to start a fire just by being turned on and off. If that's your fear, you should replace them at once, because if they can catch fire it doesn't matter whether it's an attacker or you turning them on and off.
I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale
If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.
Well just use headscale and you'll have control over everything.
I think that has more potential for problems than turning lights on and off and warrants strong security.
By the same theory, someone would need your EC SSH key to do anything with an exposed SSH port.
Practice is a separate question.
Similar here, I only build & run services that I trust myself enough to run in a secure manner by themselves. I still have a VPN for some things, but everything is built to be secure on its own.
There are quite a few services on my list at this point, and I really don't want a break-in in one thing leading to a break-in in everything. It's always possible to leave a hole in one or two things by accident.
On the other side this also means I have a Postgres instance with TCP/5432 open to the internet - with no ill effects so far, and quite a bit of trust it'll remain that way, because I understand its security properties and config now.
the new problem is now my isp uses cgnat and there's no easy way around it
tailscale avoids all that, if i wanted more control i'd probably use headscale rather than bother with raw wireguard
- w for the wireguard network.
- h for the home network.
Nothing fancy, just populate the /etc/hosts on every machine with these names.
Now, it's up to me to connect to my server1.h or server1.w depending whether I am at home or somewhere else.
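So each machine just carries a pair of entries like this (addresses are examples):

    # /etc/hosts
    10.0.0.11     server1.w   # wireguard address
    192.168.1.11  server1.h   # home LAN address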
Wireguard is distributed by distros in official packages. You don't need time, money and expertise to set up unattended upgrades with auto reboot on a Debian or Red Hat based distro. At least it is not more complicated than setting up an AI agent.
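On a Debian-based system that's roughly two files (the reboot time is just an example):

    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";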
Sure, tailscale is nice, but from an open-port-on-the-net perspective it's probably a bit below just opening wireguard.
My ISP-provided router (Free, in France) has WG built-in. But other than performance being abysmal, its main pain point is not supporting subnet routing.
So if all you want is to connect your phone / laptop while away to the local home network, it's fine. If you want to run a tunnel between two locations with multiple IPs on the remote side, you're SoL.
I mitigate this by having a dedicated machine on the border that only does routing and firewalling, with no random services installed. So anything that helpfully opens ports on internal vms won't automatically be reachable from the outside.
But what can you expect from people who provide services but won't even try to understand how they work and how they are configured as it's 'not fun enough', expecting claude code to do it right for them.
Asking AI to do a thing you've done 100 times before is OK, I guess. Asking AI to do a thing you've never done and have no idea how it's properly done - not so much, I'd say. But this guy obviously does not signal his sysadmin skills but his AI skills. I hope it brings him the result he aimed for.
Having a single port open for VPN access seems okay to me. That's what I did, but I don't want an "etc" involved in what has direct access to hardware/services in my house from outside.
So yeah, the lesson there is that if you have a port open to the internet, someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.
These days, that seems insane.
As the traffic grew, as speeds increased, licensing became necessary.
I think, these days, we're almost into that category. I don't say this happily. But having unrestricted access seems like an era coming to an end.
I realise this seems unworkable. But so was the idea of a driver's license. Sometimes society and safety comes first.
I'm willing to bet that in under a decade, something akin to this will happen.
It's worth an assessment of what you _think_ running ssh on a nonstandard port protects you against, and what it's actually doing. It won't stop anything other than the lightest and most casual script-based shotgun attacks, and it won't help you if someone is attempting to exploit an actual-for-real vuln in the ssh authentication or login process. And although I'm aware the plural of "anecdote" isn't "data," it sure as hell didn't reduce the volume of login attempts.
Public key-only auth + strict allowlists will do a lot more for your security posture. If you feel like ssh is using enough CPU rejecting bad login attempts to actually make you notice, stick it behind wireguard or set up port-knocking.
And sure, put it on a nonstandard port, if it makes you feel better. But it doesn't really do much, and anyone hitting your host up with censys.io or any other assessment tool will see your nonstandard ssh port instantly.
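For reference, the handful of sshd_config lines that actually move the needle (the allowlist entry is illustrative):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no
    PubkeyAuthentication yes
    # illustrative allowlist
    AllowUsers alice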
Now, I do agree a non-standard port is not a security tool, but it doesn't hurt to run on a random high-numbered port.
One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.
Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.
The point of a driver's license is that driving a ton of steel around at >50mph presents risk of harm to others.
Not knowing how to use a computer - driving it "poorly" - does not risk harm to others. Why does it merit restriction, based on the topic of this post?
It’s why Cloudflare exists, which in itself is another form of harm, in centralising a decentralised network.
1. "Unpatched servers become botnet hosts" - true, but Tailscale does not prevent this. A compromised machine on your tailnet is still compromised. The botnet argument applies regardless of how you access your server.
2. Following this logic, you would need to license all internet-connected devices: phones, smart TVs, IoT. They get pwned and join botnets constantly. Are we licensing grandma's router?
3. The Cloudflare point undermines the argument: "botnets cause centralization (Cloudflare), which is harm", so the solution is... licensing, which would centralize infrastructure further? That is the same outcome being called harmful.
4. Corporate servers get compromised constantly. Should only "licensed" corporations run services? They already are, and they are not doing better.
Back to the topic: I have no clue what you think Tailscale is, but it does increase security, not just convenience.
You can trust BugCorp all you want but there are more sshd processes out there than tailnets and the scrutiny is on OpenSSH. We are not comparing sshd to say WordPress here. Maybe when you don’t over engineer a solution you don’t need to spend 100x the resources auditing it…
Are you saying "unlicensed" where you mean "untrained?"
It's always perplexing to me how HN commenters replying to a comment with a statement like this, e.g., something like "I prefer [choice with some degree of DIY]", will try to "argue" against it
The "arguments" are rarely, "I think that is a poor choice because [list of valid reasons]"
Instead the responses are something like, "Most people...". In other words, a nonsensical reference to other computer users
It might make sense for a commercial third party to care about what other computer users do, but why should any individual computer user care what others do (besides genuine curiosity or commercial motive)
For example, telling family, friends, colleagues how you think they should use their computers usually isn't very effective. They usually do not care about your choices or preferences. They make their own
Would telling strangers how to use their computers be any more effective
Forum commenters often try to tell strangers what to do, or what not to do
But every computer user is free to make their own choices and pursue their own preferences
NB. I am not commenting on the open ports statement
but actually it's worse. this is HN - supposedly, most commenters are curious by nature and well versed in most basic computer stuff. in practice, it's slowly less and less the case.
worse: what is learned and expected is different from what you'd think.
for example, separating service users sure is better than nothing, but the OS attack surface as a local user is still huge, hence why we use sandboxes, which really are just OS level firewalls to reduce the attack surface.
the open port attack surface isn't terrible though: you get a bit more of the very well tested tcp/ip stack and up to 65k ports all doing the exact same thing, not terrible at all.
Now, add to it "AI" which can automatically regurgitate and implement whatever reddit and stack overflow says.. it makes for a fun future problem - such forums will end up with mostly non-new AI content (new problem being solved will be a needle in the haystack) - and - users will have learned that AI is always right no matter what it decides (because they don't know any better and they're being trained to blindly trust it).
Heck, i predict there will be a chat where a bunch of humans will argue very strongly that an AI is right while it's blatantly wrong, and some will likely put their life on the line to defend it.
Fun times ahead. As for my take: humans _need_ learning to live, but are lazy. Nature fixes itself.
That's fine, it's only people knocking on a closed door. You cannot host things such as email or HTTP without open ports, your service needs to be publicly accessible by definition.
The mid-level and free tiers aren't necessarily going to help, but the Pro/Max/Heavy tier can absolutely make setting up and using wireguard and having a reasonably secure environment practical and easy.
You can also have the high tier models help with things like operating a FreePBX server and VOIP, manage a private domain, and all sorts of things that require domain expertise to do well, but are often out of reach for people who haven't gotten the requisite hands on experience and training.
I'd suggest going through the process of setting up your self-hosting environment, then after the fact asking the language model "This is my environment: blah, a, b, c, x, y, z, blah, blah. What simple things can I do to make it more secure?"
And then repeating that exercise - create a chatgpt project, or codex repo, or claude or grok project, wherein you have the model do a thorough interrogation of you to lay out and document your environment. With that done, you condense it to a prompt, and operate within the context where your network is documented. Then you can easily iterate and improve.
Something like this isn't going to take more than a few 15 minute weekend sessions each month after initially setting it up, and it's going to be a lot more secure than the average, completely unattended, default settings consumer network.
You could try to yolo it with Operator or an elevated MCP interface to your system, but the point is, those high-tier models are good enough to make significant self hosting easily achievable.
If someone breaks regs, you want to be able to levy fines or jail. If they do it a lot, you want an inability to drive at all.
It's about regulating poor drivers. And yes, initially vetting a driver too.
I don't think it's about driving ability, besides the initial vetting.
As an aside, I dislike tailscale, and use wireguard directly.
Back to the topic: Your connected device can harm others if used poorly. I am not proposing licensing requirements.
The few things I self host I keep out in the open: etcd, Kubernetes, Postgres, pgAdmin, Grafana and Keycloak. But I can see why someone would want to hide inside a private network.
As someone who spent decades implementing and securing networks and internet-facing services for corporations large and small as well as self-hosting my own services for much of that time, the primary lesson I've learned and tried to pass on to clients, colleagues and family is:
If you expose it to the Internet, assume it will be pwned at some point.
No, that's not universally true. But it's a smart assumption to make for several reasons:
1. No software is completely bug free and those bugs can expose your service(s) to compromise;
2. Humans (and their creations) are imperfect and will make mistakes -- possibly exposing your service(s) to compromise;
3. Bad actors, ranging from marginally competent script kiddies to master crackers with big salaries and big budgets from governments and criminal organizations are out there 24x7 trying to break into whatever systems they can reach.
The above applies just as much to tailscale or wireguard as it does to ssh/http(s)/imap/smtp/etc.
I'll say it again as it's possibly the most important concept related to exposing anything:
If you expose it to the Internet, assume that, at some point, it will be compromised and plan accordingly.
If you're lucky (and good), it may not happen while you're responsible for it, but assuming it will and having a plan to mitigate/control an "inevitable" compromise will save your bacon much better than just relying on someone else's code to never break or have bugs which put you at risk.
Want to expose ports? Use Wireguard? Tailscale? HAProxy? Go for it.
And do so in ways that meet your requirements/use cases. But don't forget to at least think (better yet script/document) about what you will do if your services are compromised.
Because odds are that one day they will.
Understand, I am not advocating this. I said I did not like it. Neither of those statements has anything to do with whether I think it will come to pass, or not.
Any one of those components might be exploitable, but to get my data you'd have to exploit all of them.
LXC isolation protects Proxmox from container escapes, not services from each other over the network. Full disk encryption protects against physical theft, not network attacks while running.
And if Nextcloud has passkeys, HTTPS, and proper auth, what is Tailscale adding exactly? What is the point of this setup over the alternative? What threat does this stop that "hardened Nextcloud, exposed directly" does not? It is complexity theater. Looks like defense in depth, but the "layers" are network hops, not security boundaries.
It's slow to scan due to ICMP ratelimiting, but you can parallelize.
(Sure, you can disable / firewall drop that ICMP error… but then you can do the same thing with TCP RSTs.)
I'm sorry, what?
Add the generated Wireguard key to any device (laptops, phones, etc) and access your home LAN as if it was local from anywhere in the world for free.
Works well, super easy to set up, secure, and fast.
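For anyone curious, the whole client profile is only a few lines - roughly this, with placeholder keys, names and addresses:

    [Interface]
    PrivateKey = <device-private-key>
    Address = 10.0.0.5/32
    DNS = 192.168.1.1

    [Peer]
    PublicKey = <home-endpoint-public-key>
    Endpoint = home.example.com:51820
    # route the home LAN and the VPN subnet through the tunnel
    AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
    PersistentKeepalive = 25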
If you're blanket dropping all ICMP errors, you're breaking PMTUD. There's a special place reserved in hell for that.
(And if you're firewalling your ICMP, why aren't you firewalling TCP?)