My understanding is that when using containers, updating is an ordeal, and you avoid the need by never exposing the services to the internet.
You build a new image with updated/patched versions of the packages and then replace your vulnerable container with a new one created from the new image.
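In the common case that is only a couple of commands. A rough sketch, with made-up image and container names:

    # rebuild the image so it picks up patched packages
    docker build --pull -t myapp:latest .
    # replace the running container with one created from the new image
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myapp:latest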
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
How does one do it on nix? Bump version in a config and install? Seems similar
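I'd guess it looks something like this (hedged sketch, assuming a stock NixOS setup with either channels or flakes):

    # channels: pull newer package versions, then rebuild and activate the system
    sudo nixos-rebuild switch --upgrade
    # flakes: update the lock file first, then switch
    nix flake update && sudo nixos-rebuild switch --flake .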
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software. But that is the same as for non-containerized services.
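A minimal sketch of that, using postgres purely as an example (the volume name and tags are illustrative):

    # the data lives in a named volume, not in the container's filesystem
    docker volume create pgdata
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:16
    # the container is disposable: throw it away and recreate it from a newer image
    docker rm -f db
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:16.4

The volume survives both commands; only the program gets replaced.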
I'm a bit surprised this has to be explained in 2025. What field do you work in?
In non-containerized applications, the data & state also live outside the application, stored in files, a database, a cache, S3, etc.
In fact, that is the only way containers can decouple programs from state: the application has to have already done it itself. But with containers you get the extra steps of setting up volumes, virtual networks, and port translation.
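For comparison, a sketch of that extra plumbing, with made-up names (a bare process gets the host's filesystem and network for free):

    docker network create appnet
    docker volume create appdata
    docker run -d --name app \
      --network appnet \
      -v appdata:/data \
      -p 8080:8080 \
      example/app:latest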
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and deal with all the potential issues of the software build ...
Yes, on the happy path it is just a "docker build" that pulls updates from a Linux distro repo and then rebuilds only what is needed. But as soon as the happy path fails, this can become really tedious really quickly, because people all write their Dockerfiles differently, handle build steps differently, use different base Linux distributions, ...
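To make the happy path concrete (the image name and Debian-based base image are assumptions, not a recipe):

    # refresh the base image and rebuild without stale layers
    docker build --pull --no-cache -t myservice:patched .
    # and this is the "half a Linux distribution" you end up tracking
    docker run --rm myservice:patched dpkg -l | wc -l

Everything beyond that one command is where the per-project differences start to hurt.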
I'm a bit surprised this has to be explained in 2025. What field do you work in?
If you're, e.g., a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
So you go from having to worry about one image + N services to up-to-N images + N services.
Just that state _can_ live outside the container, and in most cases it should. It doesn't have to. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that data is basically gone, which is why the state usually does live outside, like you are saying.
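A quick way to see the difference, with throwaway names and alpine purely as an example:

    # no volume: the file lives in the container's writable layer and is deleted with it
    docker run --name scratch alpine sh -c 'echo hello > /tmp/note.txt'
    docker rm scratch
    # named volume: the file outlives any individual container
    docker run --rm -v notes:/data alpine sh -c 'echo hello > /data/note.txt'
    docker run --rm -v notes:/data alpine cat /data/note.txt   # prints "hello"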
Doing all that with containers is a spaghetti soup of custom scripts.
You're usually deep within a social bubble of some sort if you find yourself assuming otherwise.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing - using just one browser, and that one without containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
But we live where we live.