The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
I tried handwriting https://blog.jakesaunders.dev/schemaless-search-in-postgres/ but I thought it came off as rambling.
Maybe I'll have a go at redrafting it tomorrow in non-LLM-ese.
* but if you’re used to bind-mounting, they’ll be a hassle
Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
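To make that footgun concrete, here's a minimal sketch (assuming a bind-mounted project checkout; paths and payloads are made up) of why write access to .git is code execution for the next person or tool that runs git:

    cd /path/to/mounted/repo            # hypothetical bind-mounted checkout

    # a hook fires on ordinary operations, e.g. `git commit` runs pre-commit
    printf '#!/bin/sh\nid > /tmp/pwned-by-git-hook\n' > .git/hooks/pre-commit
    chmod +x .git/hooks/pre-commit

    # or write core.fsmonitor into .git/config: git runs that command
    # on something as innocent as `git status`
    git config core.fsmonitor "touch /tmp/pwned-by-fsmonitor"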
But that alone would not solve the problem, since this is an RCE reachable over HTTP; that is why edge proxy providers like Cloudflare[0] and Fastly[1] proactively added protections to their WAF products.
Even Cloudflare had an outage trying to protect its customers[2].
[0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
[1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
[2] https://blog.cloudflare.com/5-december-2025-outage/
> IT NEVER ESCAPED.
You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].
Also, your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the user running the container runtime, and whether the container is privileged are three different things that are being talked about as one.
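To spell out the distinction (a sketch, not from the article; images and UIDs are placeholders):

    # 1. The user *inside* the container: the process UID in the container's namespace
    docker run --rm --user 1000:1000 alpine id

    # 2. The user running the container *runtime*: a root dockerd vs. rootless podman/docker,
    #    where container "root" maps to an unprivileged UID on the host
    podman run --rm alpine id

    # 3. Whether the container is privileged, which weakens isolation regardless of 1 and 2
    docker run --rm --privileged alpine ls /dev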
Also, see my comment on firewall: >>46306974
[0]: Not necessarily drop-everything-you-do urgently, but next time you get some downtime to do it calmly. Recovering like this is a good exercise anyway, to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work than actually confirming that the host is uncontaminated.
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": >>42603136 .
Networking is just better in podman.
That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
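As an example of keeping the boundary intact instead of punching through it, here's a hedged sketch of a more locked-down `docker run` (the flags are standard Docker options; the image name is a placeholder):

    docker run --rm \
      --user 1000:1000 \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --read-only --tmpfs /tmp \
      --network none \
      my-untrusted-image

The point is the opposite direction from `--privileged`: drop the capabilities, write access, and network you don't need rather than adding more back.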
Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].
I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.
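For what it's worth, Docker's `--tmpfs` does accept mount options, so the scratch mount itself can be made `noexec` (path and size here are made up):

    docker run --rm \
      --tmpfs /scratch:rw,noexec,nosuid,size=64m \
      alpine sh -c 'mount | grep /scratch'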
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
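A minimal sketch of that model with Squid as the egress proxy (the domain list, methods, and hostnames are examples only; whitelisting full URLs and methods over HTTPS additionally requires TLS interception with your own CA, which is the certificate-distribution pain mentioned above):

    # /etc/squid/squid.conf (fragment)
    acl allowed_dst dstdomain .ubuntu.com .pypi.org registry.npmjs.org
    acl allowed_methods method GET HEAD POST
    http_access allow allowed_dst allowed_methods
    http_access deny all

    # on each host: no default route, no public DNS, just the proxy
    export http_proxy=http://egress-proxy.internal:3128
    export https_proxy=http://egress-proxy.internal:3128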
Also, the Docker Compose tool is a well-known exception to the compatibility story. (There is an unofficial podman compose tool, but it is not feature complete, and quadlets are better anyway.)
I agree with approaching podman as its own thing though. Yes, you can build a Dockerfile, but buildah lets you build an efficient OCI image from scratch without needing root. For those interested, this document¹ explains how buildah compares to podman and docker.
1. https://github.com/containers/buildah/tree/main/docs/contain...
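For a flavour of what "from scratch without root" looks like with buildah, here's a rough sketch (package manager, package set, and image name are illustrative; when rootless, the `buildah mount` step needs to run inside `buildah unshare`):

    ctr=$(buildah from scratch)
    mnt=$(buildah mount "$ctr")                      # mount the (empty) rootfs
    dnf install -y --installroot "$mnt" --releasever 40 coreutils
    buildah umount "$ctr"
    buildah config --cmd /bin/sh "$ctr"
    buildah commit "$ctr" my-minimal-image           # commit an OCI image containing only what you added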
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were some kind of remote management agents installed. This container was running Alpine so it was pretty easy to identify what processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
In the end I just scrapped the container, though I did save it in case I ever feel like digging around (probably not). I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups; in my case I do daily ZFS snapshots in Incus, which makes rolling back to before the intrusion a breeze (rough sketch after this list).
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
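For anyone curious, the snapshot/rollback side in Incus is roughly this (instance and snapshot names are placeholders; check `incus snapshot --help` for the exact subcommands on your version):

    incus config set web snapshots.schedule "@daily"    # automatic daily snapshots
    incus snapshot list web                              # see what you have
    incus snapshot restore web snap3                     # roll back to a pre-intrusion state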
Regarding the miner itself:
- From what I could see in its configuration, it hadn't actually been configured correctly, so it's possible they run some kind of benchmark and just leave the system silently compromised if it's not "worth it"; they still have a way in to use it for other purposes.
- No attempt had been made at file system obfuscation, which is probably the only reason I discovered it at all. There were literally folders in /root lying around with the word "monero" in them; this could have been easily hidden.
- If they hadn't installed a miner and had just silently compromised the system, leaving whatever was running on it alone (or even doing a better job with CPU priority), I probably never would have noticed this.
There is nothing wrong with this article. Please continue to write as you; it's what people came for.
LLMs have their place. I find it useful to prompt an LLM to fix typos and outright errors and also prompt them to NOT alter the character or tone of the text; they are extraordinarily good at that.
https://support.broadcom.com/web/ecx/support-content-notific...
https://nvd.nist.gov/vuln/detail/CVE-2019-5183
https://nvd.nist.gov/vuln/detail/CVE-2018-12130
https://nvd.nist.gov/vuln/detail/CVE-2018-2698
https://nvd.nist.gov/vuln/detail/CVE-2017-4936
In the end you need to configure it properly and pray there are no escape vulnerabilities. That's the same standard you applied to containers to declare they're definitely never a security boundary. Seems like you're drawing some pretty arbitrary lines here.
> If your app’s React code does not use a server, your app is not affected by this vulnerability. If your app does not use a framework, bundler, or bundler plugin that supports React Server Components, your app is not affected by this vulnerability.
https://react.dev/blog/2025/12/03/critical-security-vulnerab...
So if you have a backend that supports RSC, even if you don't use it, you can still be vulnerable.
GP said they only shipped front ends but that can mean a lot.
Edit: link
https://nvd.nist.gov/vuln/detail/CVE-2025-29927
That, plus the most recent React one, and you have a culture that doesn't care about its customers so much as chasing fads to further greedy careers.
https://www.statista.com/statistics/420400/spam-email-traffi...
AWS explicitly spells this out on its Shared Responsibility Model page [0].
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer one in some form (e.g., AWS Security Groups and network ACLs).
All of that is on the customer. The provider should guarantee the security of the cloud. The customer is responsible for security in the cloud.
[0] https://aws.amazon.com/compliance/shared-responsibility-mode...
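As a concrete example of the "security in the cloud" half: the firewall primitive is there, but wiring it up is on you. A sketch with the AWS CLI (VPC ID, group name, and rules are placeholders):

    # create a security group and allow only inbound HTTPS
    sg_id=$(aws ec2 create-security-group --group-name web-only \
      --description "allow inbound 443 only" --vpc-id vpc-0123456789abcdef0 \
      --query GroupId --output text)
    aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
      --protocol tcp --port 443 --cidr 0.0.0.0/0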
That's the point: it's private by design, and unless they tell you, nobody will ever know how much they use or for what. The true hacker spirit.
If you bother to look past news headlines, you will find a vibrant community of people paying for legal goods, who value privacy over FUD and ignorance.
This kind of fearmongering is already leading us towards a cashless society because "only criminals use it". This is hackernews and not facebook or congress so it should be obvious to everybody here what the end result of criminalizing/demonizing non KYC payments will be (hint: look at china).
You can do:
python3 -m http.server -b 127.0.0.1 8080
python3 -m http.server -b 127.0.0.2 8080
python3 -m http.server -b 127.0.0.3 8080
and all will be available.
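On Linux the whole 127.0.0.0/8 block is routed to the loopback interface, so you can hit each server independently, e.g.:

    curl -s http://127.0.0.1:8080/ >/dev/null && echo "127.0.0.1 ok"
    curl -s http://127.0.0.2:8080/ >/dev/null && echo "127.0.0.2 ok"
    curl -s http://127.0.0.3:8080/ >/dev/null && echo "127.0.0.3 ok"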
Private network ranges don't really serve the same purpose: they can be routed, and you always have to consider conflicts and so on. But with 127/8 you are in your own world and you don't worry about anything. You can also do tests where you need to expose more than 65k ports :)
You also have to remember these conventions were established likely before DNS was even a thing: IP space was considered so big that anyone could have a huge chunk of it, and it was mostly managed manually.
The docs you need for quadlets are basically here: https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
The one gotcha I can think of not mentioned there is that if you run it as a non-root user and want it to run without logging in as that user, you need to: `sudo loginctl enable-linger $USER`.
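For reference, a minimal quadlet looks something like this (image, port, and file name are placeholders; for a rootless user it goes in `~/.config/containers/systemd/`):

    # ~/.config/containers/systemd/myapp.container
    [Unit]
    Description=My app container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=127.0.0.1:8080:80

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

Then `systemctl --user daemon-reload` and `systemctl --user start myapp.service`; quadlet generates the .service unit for you.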
If you don't vibe with quadlets, it's equally fine to do a normal systemd .service file with `ExecStart=podman run ...`, which quadlets are just convenience sugar for. I'd start there and then return to quadlets if/when you find that becomes too messy. Don't add new abstraction layers just because you can if they don't help.
If you have a more complex service consisting of multiple containers you want to schedule as a single unit, it's also totally fine to combine systemd and compose by having `ExecStart=podman compose up ...`.
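That combination can be as simple as a unit like this (paths and names are placeholders, assuming a compose file sits in the working directory):

    # /etc/systemd/system/mystack.service
    [Unit]
    Description=My multi-container stack
    After=network-online.target
    Wants=network-online.target

    [Service]
    WorkingDirectory=/opt/mystack
    ExecStart=/usr/bin/podman compose up
    ExecStop=/usr/bin/podman compose down
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target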
Do you want it to run silently in the background with control over autorestarts and log to system journal? Quadlets/systemd.
Do you want to have multiple containers scheduled together (or just prefer it)? Compose.
Do you want to manually invoke it and have the output in a terminal by default? CLI run or compose.
If you plug in a machine at home, it is behind the router, and behind the router's firewall.
If you want more of a firewall locally, something as simple as an EdgeRouter X can get you started easily with this excellent guide: https://github.com/mjp66/Ubiquiti
The nice thing about using Cloudflare Tunnel is that there are zero ports to expose, ever. The cloudflared app running on your local machine is what connects out to the internet and takes care of creating a secure connection between Cloudflare and your machine.
If you want to expose more than one service, you could use something like Cloudflare to forward to one machine on your home network and then have something like Nginx Proxy Manager route the traffic around internally.
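The cloudflared side of that looks roughly like this (tunnel name, hostnames, and ports are placeholders; the ingress rules in config.yml are where you fan out to internal services or a reverse proxy):

    cloudflared tunnel login
    cloudflared tunnel create home
    cloudflared tunnel route dns home app.example.com

    # ~/.cloudflared/config.yml (sketch)
    #   tunnel: <UUID printed by `tunnel create`>
    #   credentials-file: /home/me/.cloudflared/<UUID>.json
    #   ingress:
    #     - hostname: app.example.com
    #       service: http://localhost:8080
    #     - service: http_status:404

    cloudflared tunnel run home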
It's totally fine to start with Cloudflare, and if you aren't already using it, something like Proxmox (the YouTube tutorials are short) gets you up and running and playing pretty quick. Feel free to ask any other questions you like.