Worse, the linked bug report is from a DECADE ago, and the comments underneath don't seem to show any sense of urgency or concern about how bad this is.
Have I missed something? This seems appalling.
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
As someone says in that PR, "there are many beginners who are not aware that Docker punches the firewall for them. I know no other software you can install on Ubuntu that does this."
Anyone with a modicum of knowledge can install Docker on Ubuntu -- you don't need to know a thing about ufw or iptables, and you may not even know what they are. I wonder how many machines now have ports exposed to the Internet or some random IoT device as a result of this terrible decision?
As for it not being explicitly permitted: no ports are exposed by default. You must pass -p to the docker run command for each port you want exposed. From their perspective, they're just doing exactly what you told them to do.
Personally, I think it should default to giving you an error unless you specified which IPs to listen on, but this is far from as big an issue as people make it out to be.
The biggest issue is that it is a ginormous foot gun for people who don't know Docker.
Maybe it's the difference between "-P" and "-p", or specifying "8080:8080" instead of just "8080", but there is a difference, especially since one form may not be reachable from outside your machine at all while another, in the worst case, ends up bound to 0.0.0.0.
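If I have the flags right (assuming a stock daemon config; "some-app" is just a placeholder image name), the publish forms roughly behave like this:

    docker run some-app                          # nothing published; only reachable from other containers on the same Docker network
    docker run -p 8080 some-app                  # container port 8080 -> a random host port, bound to 0.0.0.0
    docker run -p 8080:8080 some-app             # host port 8080 -> container port 8080, bound to 0.0.0.0
    docker run -P some-app                       # every EXPOSEd port -> random host ports, bound to 0.0.0.0
    docker run -p 127.0.0.1:8080:8080 some-app   # host port 8080 -> container port 8080, loopback only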
For people unfamiliar with Linux firewalls or the software they're running: maybe. First of all, Docker requires admin permissions, so whoever is running these commands already has admin privileges.
Docker manages its own iptables chains. If you rely on something like UFW, which works through the default chains or its own custom chains, you can get unexpected behaviour.
However, there's nothing secret happening here. Just listing the current firewall rules should display everything Docker permits and more.
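For instance, with the iptables backend (chain names as I remember them from the Docker docs; exact output will vary by version):

    sudo iptables -L DOCKER -n -v          # per-container ACCEPT rules for published ports
    sudo iptables -L DOCKER-USER -n -v     # the chain Docker leaves for your own rules, evaluated before its own
    sudo iptables -t nat -L DOCKER -n -v   # the DNAT rules that actually forward published ports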
Furthermore, the ports opened are the ones declared in the command line (-p 1234) or in something like docker-compose declarations. As explained in the documentation, not specifying an IP address will open the port on all interfaces. You can disable this behaviour if you want to manage it yourself, but then you would need some kind of scripting integration to deal with the variable behaviour Docker sometimes has.
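If that refers to Docker's iptables management, the documented switch is the "iptables" daemon option, if I remember the key correctly; be aware that with it off you own all the forwarding and NAT rules yourself, including outbound masquerading for containers.

    $ cat /etc/docker/daemon.json
    {
      "iptables": false
    }
    $ sudo systemctl restart docker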
From Docker's point of view, I sort of agree that this is expected behaviour. People finding out afterwards often misunderstand how their firewall works, and haven't read or fully understood the documentation. For beginners, who may not be familiar with networking, Docker "just works" and the firewall in their router protects them from most ills (hackers present in company infra excluded, of course).
Imagine having to adjust your documentation to go from "to try out our application, run `docker run -p 8080 -p 1234 some-app`" to "to try out our application, run `docker run -p 8080 -p 1234 some-app`, then run `nft add rule ip filter INPUT tcp dport 1234 accept;nft add rule ip filter INPUT tcp dport 8080 accept;` if you use nftables, or `iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT; iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT` if you use iptables, or `sudo firewall-cmd --add-port=1234/tcp;sudo firewall-cmd --add-port=8080/tcp; sudo firewall-cmd --runtime-to-permanent` if you use firewalld, or `sudo ufw allow 1234; sudo ufw allow 8080` if you use UFW, or if you're on Docker for Windows, follow these screenshots to add a rule to the firewall settings and then run the above command inside of the Docker VM". Also don't forget to remove these rules after you've evaluated our software, by running the following commands: [...]
Docker would just not gain any traction as a cross-platform deployment model, because managing it would be such a massive pain.
The fix is quite easy: just bind to localhost (e.g. -p 127.0.0.1:1234:1234 instead of -p 1234:1234) if you want to run stuff on your local machine, or to an internal IP that's not routed to the internet if you're running this stuff over a network. Unfortunately, a lot of developers publishing their Docker containers don't tell you to do that, but in my opinion that's more of a software product problem than a Docker problem. In many cases, I do want applications to be reachable on all interfaces, and having to specify each and every one of them (especially scripting around the occasional address change) would be a massive pain.
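If it helps, the same thing in a compose file, plus the daemon-wide default bind address (the "ip" key; the service and image names below are placeholders):

    # docker-compose.yml: publish on loopback only
    services:
      some-app:
        image: some-app
        ports:
          - "127.0.0.1:1234:1234"

    # /etc/docker/daemon.json: change the default bind address for all published ports
    { "ip": "127.0.0.1" }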
For this article, I do wonder how this could've happened. For a home server to be exposed like that, the server would need to be hooked to the internet without any additional firewalls whatsoever, which I'd think isn't exactly typical.
This is the default for most aspects of Docker. Reading the source code & git history is a revelation of how badly things can be done, as long as you burn VC money on marketing. Do yourself a favor and avoid all things by that company / those people; they've never cared about quality.
If you just run a container, it will expose zero ports, regardless of any config made in the Docker image or container.
The way you're supposed to use Docker is to create a Docker network, attach the various containers to it, and publish only the ports on the specific containers you need external access to. All containers on the same network can connect to each other, with zero externally published ports.
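A minimal sketch of that pattern (all names here are placeholders except the postgres image):

    docker network create appnet
    docker run -d --name db --network appnet -e POSTGRES_PASSWORD=example postgres:16
    docker run -d --name web --network appnet -p 127.0.0.1:8080:80 some-web-app
    # "web" reaches the database at db:5432 over the Docker network;
    # the only thing published to the host is port 8080, and only on loopback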
The trouble is just that this is not really explained well for new users, and so ends up being that aforementioned foot gun.
I sympathize with your reluctance to push a burden onto the users, but I disagree with this example. That's a false dichotomy: whatever system-specific commands Docker executes by default to allow traffic from all interfaces to the desired port could have been made contingent on a new command parameter (say, --open-firewall). Removing those rules could have also been managed by the Docker daemon on container removal.
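To make that concrete, the opt-in might have looked something like this (--open-firewall is hypothetical, not a real flag):

    # today: -p binds the port AND adds the firewall rules for you
    docker run -p 8080:8080 some-app

    # the proposal: -p only binds; the host firewall is only touched when you explicitly ask
    docker run -p 8080:8080 --open-firewall some-app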