I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, Docker still punches holes in the firewall this way, but to access the container an attacker would need to get a packet routed to the host that is addressed to `127.0.0.1`. Every solution that improves on this seems to be much more involved.
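In Compose syntax (service name, image, and port are just examples), the difference is one prefix:

```yaml
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # bound to loopback; not reachable from other hosts
      # - "5432:5432"           # binds 0.0.0.0 and punches through the host firewall
```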
Of course all languages can produce insecure binaries, but C/C++ buffer overflows and similar vulnerabilities are likely what AlgebraFox refers to.
Either way, nothing is foolproof. It is part of a solid defense in depth [1].
[1] https://en.m.wikipedia.org/wiki/Defense_in_depth_(computing)
Been using it for years and it’s been solid.
@globular-toast was not suggesting an iptables setup on a VM, instead they are suggesting to have a firewall on a totally different device/VM than the one running docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...) but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for 2 VMs and keep them both patched).
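For reference, the linked answer boils down to something like this on the dedicated firewall box (interface names and addresses are placeholders, not a complete ruleset):

```shell
# eth0 = WAN, eth1 = LAN side facing the docker host (192.168.1.10)
sysctl -w net.ipv4.ip_forward=1

# Default-deny forwarding, then allow only the traffic you intend to expose
iptables -P FORWARD DROP
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 443 -j ACCEPT

# Forward the public port to the docker host and masquerade replies
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.1.10:443
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
```

The point stands that keeping rules like these correct and patched on a second machine is real ongoing work.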
https://github.com/containers/podman/blob/main/docs/tutorial...
Edit: It can also happen with the official image: https://www.postgresql.org/about/news/cve-2019-9193-not-a-se...
See https://www.postgresql.org/docs/current/sql-copy.html#id-1.9...
Specifically the `filename` and `PROGRAM` parameters.
And that is documented, expected out-of-the-box behaviour, without even looking for an exploit...
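For illustration, a sketch of what those documented options allow a superuser (or a member of pg_execute_server_program) to do; the paths are made up:

```sql
-- COPY ... TO PROGRAM runs an arbitrary shell command as the postgres OS user
COPY (SELECT 1) TO PROGRAM 'id > /tmp/pwned';

-- COPY with a filename reads or writes files on the database server directly
COPY (SELECT 'arbitrary content') TO '/var/lib/postgresql/dropped.txt';
```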
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
After moving from bare to compose to docker-compose to podman-compose and bunch of things in-between (homegrown Clojure config-evaluators, ansible, terraform, make/just, a bunch more), I finally settled on using Nix for managing containers.
It's basically the same as docker-compose except you get to do it with proper code (although Nix :/ ) and, as an extra benefit, get to avoid YAML.
You can switch the backend/use multiple ones as well, and it's relatively easy to configure as long as you can survive learning the basics of the language: https://wiki.nixos.org/wiki/Docker
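A minimal sketch of what that looks like in NixOS (image, container name, and port are placeholders):

```nix
{
  virtualisation.oci-containers = {
    backend = "podman";  # or "docker"
    containers.myapp = {
      image = "nginx:latest";
      ports = [ "127.0.0.1:8080:80" ];  # same port syntax as compose, loopback-only here
    };
  };
}
```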
There's a bunch of software that makes this trivially easy, too; one example: https://github.com/g1ibby/auto-vpn/
UPD: hmm, seems quite promising - https://chat.mistral.ai/chat/1d8e15e9-2d1a-48c8-be3a-856254e...
If you have opened up a port in your network to the public, the correct assumption is that outside connections will be directed to your application, as per your explicit request.
I am running Immich on my home server and want to be able to access it remotely.
I’ve seen the options of using wireguard or using a reverse proxy (nginx) with Cloudflare CDN, on top of properly configured router firewalls, while also blocking most other countries. Lots of this understanding comes from a YouTube guide I watched [0].
From what I understand, people say reverse proxy/Cloudflare is faster for my use case, and if everything is configured correctly (which it seems like OP totally missed the mark on here), the threat of breaches into my server should be minimal.
Am I misunderstanding the “minimal” nature of the risk when exposing the server via a reverse proxy/CDN? Should I just host a VPN instead even if it’s slower?
Obviously I don’t know much about this topic. So any help or pointing to resources would be greatly appreciated.
I'm in the same boat. I've got a few services exposed from a home service via NGINX with a LetsEncrypt cert. That removes direct network access to your machine.
Ways I would improve my security:
- Adding a WAF (ModSecurity) to NGINX - big time investment!
- Switching from public facing access to Tailscale only (Overlay network, not VPN, so ostensibly faster). Lots of guys on here do this - AFAIK, this is pretty secure.
Reverse proxy vs. Overlay network - the proxy itself can have exploitable vulnerabilities. You should invest some time in seeing how nmap can identify NGINX services, and see if those methods can be locked down. Good debate on it here:
https://security.stackexchange.com/questions/252480/blocking...
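A quick way to see what a scanner sees (hostname and ports are placeholders):

```shell
# Service/version detection against your public-facing ports
nmap -sV -p 80,443 home.example.net
```

At minimum, `server_tokens off;` in the nginx config stops the version number from being advertised in response headers and error pages.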
More confusingly, firewalld has a different feature to address the core problem [1] but the page you linked does not mention 'StrictForwardPorts' and the page I linked does not mention the 'docker-forwarding' policy.
That option has nothing to do with the problem at hand.
https://docs.docker.com/reference/compose-file/networks/#ext...
I encountered it with Docker on NixOS and found it confusing. They have since documented this behavior: https://search.nixos.org/options?channel=24.11&show=virtuali...
Yeah I wouldn't do this personally, I just mentioned it as the simplest option. Unless it's meant to be a public service, I always try to at least hide it from automated scanners.
> If I use nginx as a reverse proxy, would I be mitigating the risk?
If the reverse proxy performs additional authentication before allowing traffic to pass onto the service it's protecting, then yes, it would.
One of my more elegant solutions has been to forward a port to nginx and configure it to require TLS client certificate verification. I generated and installed a certificate on each of my devices. It's seamless for me in day to day usage, but any uninvited visitors would be denied entry by the reverse proxy.
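Roughly, the setup looks like this (names and validity periods are arbitrary); on the nginx side it comes down to `ssl_client_certificate ca.crt;` plus `ssl_verify_client on;` in the server block:

```shell
# Create a private CA (self-signed, 10 years)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=my-home-ca" \
  -keyout ca.key -out ca.crt -days 3650

# Create a client key + CSR and sign it with the CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=my-phone" \
  -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 3650

# Sanity check: the client cert chains to the CA
openssl verify -CAfile ca.crt client.crt
```

The client.crt/client.key pair (typically bundled into a .p12 with `openssl pkcs12 -export`) is what gets installed on each device.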
However, support for client certificates outside of browsers is spotty across platforms, which is unfortunate. For example, HomeAssistant on Android supports it [1] (after years of pleading), but the iOS version doesn't [2]. NextCloud for iOS, however, supports it [3].
In summary, I think any kind of authentication added at the proxy would be great for both usability and security, but it has very spotty support.
> Based on other advice, it seems like the self hosted VPN (wireguard) is the safest option, but slower.
I think so. It shouldn't be slow per se, but it's probably going to affect battery life somewhat and it's annoying to find it disconnected when you try to access Immich or other services.
[1] https://github.com/home-assistant/android/pull/2526
[2] https://community.home-assistant.io/t/secure-communication-c...
Practically all of these attacks require downloading remote files to the server once the attacker gains access, using curl, wget or bash.
Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.
Also these cryptominers are usually dropped to /tmp, /var/tmp or /dev/shm. They need internet access to work, so again, restricting outbound connections per binary usually mitigates these issues.
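A rough sketch of both mitigations; note that iptables' owner match works per UID, not per binary, so true per-binary egress control needs something like AppArmor or SELinux (www-data is an example account):

```shell
# Make the usual drop locations non-executable
mount -o remount,noexec,nosuid /tmp
mount -o remount,noexec,nosuid /var/tmp
mount -o remount,noexec,nosuid /dev/shm

# Block outbound traffic for the account a compromised service runs as
# (a per-UID approximation of per-binary restriction)
iptables -A OUTPUT -m owner --uid-owner www-data -j REJECT
```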
https://www.aquasec.com/blog/kinsing-malware-exploits-novel-...
this secondary issue with docker is a bit more subtle, it's that they don't respect the bind address when they do forwarding into the container. the end result is that machines one hop away can forward packets into the docker container.
for a home user the impact could be that the ISP can reach into the container. depending on risk appetite this can be a concern (salt typhoon going after ISPs).
more commonly it might end up exposing more isolated work related systems to related networks one hop away
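a rough sketch of the one-hop pivot from an adjacent machine (all addresses are placeholders): route the container subnet via the docker host and talk to the container directly, published or not:

```shell
# 198.51.100.10 = the docker host's address on our shared segment
# 172.17.0.0/16 = docker's default bridge subnet
ip route add 172.17.0.0/16 via 198.51.100.10

# If the host's FORWARD chain lets it through (docker's defaults often do),
# this reaches the container even though nothing was published to us
curl http://172.17.0.2:5432/
```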
It’s well-intentioned, but I honestly believe that it would lead to a plethora of security problems. Maybe I am missing something, but it strikes me as on the level of irresponsibility of handing out guardless chainsaws to kindergartners.
Upd: thanks for a link, looks quite bad. I am now thinking that an adjacent VM in a provider like Hetzner or Contabo could be able to pull it off. I guess I will have to finally switch remaining Docker installations to Podman and/or resort to https://firewalld.org/2024/11/strict-forward-ports
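If I read the firewalld announcement right, it's a single switch in the daemon config (firewalld >= 2.3):

```ini
# /etc/firewalld/firewalld.conf
# Forwarded ports (which docker's port publishing relies on) stop being
# implicitly reachable from other hosts; explicit rules are required.
StrictForwardPorts=yes
```

followed by a `firewall-cmd --reload`.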
However, security is hard and people will drop interest in your project if it doesn't work automatically within five minutes.
The hard part is at what experience level the warnings can stop. Surely developer documentation doesn't need the "docker exposes ports by default" lesson repeated every single time, but there are a _lot_ of "beginner" tutorials on how to set up software through containers that ignore any security stuff.
For instance, when I Google "how to set up postgres on docker", this article is returned, clearly aimed at beginners: https://medium.com/@jewelski/quickly-set-up-a-local-postgres... This will set up an easily guessed password on both postgres and pgadmin, open to the wider network without warning. Not so bad when run on a local VM or Linux computer; quite terrible when used for a small project on a public cloud host.
The problems caused by these missing warnings are almost always the result of lacking knowledge about how Docker configures its networks, or how (Linux) firewalls in general work. However, most developers I've met don't know or care about these details. Networking is complicated beyond the bare basics, and security gets in the way.
With absolutely minimal impact on usability, all those guides that open ports to the entire internet can just prepend 127.0.0.1 to their port definitions. Everyone who knows what they're doing will remove them when necessary, and the beginners need to read and figure out how to open ports if they do want them exposed to the internet.
well, that's what I opened with: >>42601673
problem is, I was told in >>42604472 that this protection is easier to work around than I imagined...
if there's defense in depth it may be worth checking out L2 forwarding within a project for unexpected pivots an attacker could use. we've seen this come up in pentests
I work on SPR, we take special care in our VPN to avoid these problems as well, by not letting docker do the firewalling for us. (one blog post on the issue: https://www.supernetworks.org/pages/blog/docker-networking-c...).
as an aside there's a closely related issue with one-hop attacks with conntrack as well, that we locked down in October.
As a rule of thumb, I will gladly pass Tor traffic along, but not run an exit node, and I understand if network admins want to block entry nodes, too. It is a decision everyone who maintains a network has to make for themselves.
The reason I block it is also the same reason I block banana republics like CN and RU: these don't prosecute people who break the law with regards to hacking. Why should one accept unrestricted traffic from these?
In the end, the open internet was once a TAZ [1] and unfortunately with the commercialization of the internet together with massive changes in geopolitics the ship sailed.
[1] https://en.m.wikipedia.org/wiki/Temporary_Autonomous_Zone
> My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees.
"frivolous newsletters" -- Thank you for your honesty!Real question: One million employees!? Even Foxconn doesn't have one million employees. That leaves only Amazon and Walmart according to this link: https://www.statista.com/statistics/264671/top-50-companies-...