zlacker

Nitro: A tiny but flexible init system and process supervisor

submitted by todsac+(OP) on 2025-08-22 19:06:29 | 250 points 95 comments

5. lrvick+Uj[view] [source] 2025-08-22 21:00:58
>>todsac+(OP)
At Distrust, we wrote a dead simple init system in Rust that is used by a few clients in production with security-critical enclave use cases.

<500 lines and uses only the Rust standard library to make auditing easy.

https://git.distrust.co/public/nit

6. axlee+Wj[view] [source] 2025-08-22 21:01:18
>>todsac+(OP)
I'd recommend changing the name; Nitro is already a semi-popular server engine for Node.js: https://nitro.build/
8. Ericso+7k[view] [source] 2025-08-22 21:02:21
>>todsac+(OP)
If I may plug my friend and colleague's work, https://nixos.org/manual/nixos/unstable/#modular-services has just landed in Nixpkgs.

This will be a game changer for porting NixOS to new init systems, and even new kernels.

So, it's a good time to be experimenting with things like Nitro here!

9. runako+gk[view] [source] 2025-08-22 21:03:13
>>todsac+(OP)
The name & function overlap with AWS Nitro is severe:

https://docs.aws.amazon.com/whitepapers/latest/security-desi...

10. stock_+ok[view] [source] 2025-08-22 21:04:04
>>todsac+(OP)
It will be interesting to compare this to dinit[1], which is used by Chimera Linux.

Giving the readme a brief scan, it doesn't look like it currently handles service dependencies?

[1]: https://github.com/davmac314/dinit

12. nine_k+Zk[view] [source] [discussion] 2025-08-22 21:07:44
>>Flux15+6j
S6 is way more complex and rich. Nitro or runit would be simpler alternatives; maybe even https://github.com/krallin/tini.
29. MyOutf+3A[view] [source] [discussion] 2025-08-22 22:37:55
>>pas+St
Mostly just zombies and signal handlers.

And your software can do it, if it's written with the assumption that it will be pid 1, but most non-init software isn't. And rather than write your software to do so, it's easier to just reach for something like tini that does it already with very little overhead.

I'd recommend reading the tini readme[0] and its linked discussion for full detail.

[0]: https://github.com/krallin/tini
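
Concretely, those two jobs boil down to roughly the following (a minimal C sketch of the idea, not tini's actual code; tini additionally handles process groups, subreapers, TTYs, and various exit-code edge cases):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t child = -1;

    static void forward(int sig) {
        if (child > 0)
            kill(child, sig);            /* pass TERM/INT on to the real workload */
    }

    int main(int argc, char *argv[]) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 64;
        }
        signal(SIGTERM, forward);        /* pid 1 gets no default signal handling */
        signal(SIGINT,  forward);

        child = fork();
        if (child == 0) {                /* child: become the actual workload */
            execvp(argv[1], &argv[1]);
            _exit(127);
        }

        for (;;) {                       /* parent: reap everything reparented to us */
            int status;
            pid_t pid = waitpid(-1, &status, 0);
            if (pid == child)            /* exit when the main workload exits */
                return WIFEXITED(status) ? WEXITSTATUS(status)
                                         : 128 + WTERMSIG(status);
            if (pid < 0 && errno == ECHILD)
                return 1;                /* no children left at all */
        }
    }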

40. westur+6O[view] [source] [discussion] 2025-08-23 00:18:27
>>mikepu+Yx
For workstations with GPUs and various kernel modules, rpm-ostree + GRUB + Native Containers for the rootfs and /usr, with flatpaks etc. on a different partition, works well enough.

ostree+grub could be much better at handling failover for devices like switches and rovers, which then need disk space for at least two separate A/B flash slots, badblocks, and a separate /root quota. ("support configuring host to retain more than two deployments" https://github.com/coreos/rpm-ostree/issues/577#issuecomment... )

Theoretically there's a disk space advantage to container layers.

Native Containers are bare-metal host images as OCI Images, which can be stored in OCI Container Registries (or Artifact Registries, since those hold packages too). GitHub, GitLab, Gitea, GCP, and AWS all host OCI Container/Artifact Registries.

From >>44401634 re bootc-image-builder, Native Containers, and ublue-os/image-template, ublue-os/akmods, ublue-os/toolboxes w/ "quadlets and systemd" (and tini is already built in to Docker and Podman), though ublue/bazzite has too many patches for a robot:

> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/

SBOM tools can scan hosts, VMs, and containers to identify software versions and licenses for citation and attribution. (CC-BY-SA requires attribution if the derivative work is distributed. AGPL applies to hosted but not necessarily distributed derivative works. There's choosealicense.com, which has a table of open source license requirements in an appendix: https://choosealicense.com/appendix/ )

BibTeX doesn't support schema.org/SoftwareApplication or subproperties of schema:identifier for e.g. the DOI URN of the primary schema.org/ScholarlyArticle and its :funder(s).

...

ROS on devices, ROS in development and simulation environments;

Conda-forge and RoboStack host ROS (Robot Operating System) as conda packages.

RoboStack/ros-noetic is ROS as conda packages: https://github.com/RoboStack/ros-noetic

gz-sim is the new version of gazebosim, a simulator for ROS development: https://github.com/conda-forge/gz-sim-feedstock

From >>44372666 :

> mujoco_menagerie has Mujoco MJCF XML models of various robots.

Mujoco ROS-compatibility: https://github.com/google-deepmind/mujoco/discussions/990

Moveit2: https://github.com/moveit/moveit2 :

> Combine Gazebo, ROS Control, and MoveIt for a powerful robotics development platform.

RoboStack has moveit2 as conda packages with clearly-indicated patches for Lin/Mac/Win: ros-noetic-moveit-ros-visualization.patch: https://github.com/RoboStack/ros-noetic/blob/main/patch/ros-...

...

Devcontainer.json has been helpful for switching between projects lately.

devcontainer.json can reference a local container/image:name or a path to a ../Dockerfile. I personally prefer to build a named image with a Makefile, though vscode Remote Containers (the devcontainers extension) can build from a Dockerfile and, if the devcontainer build succeeds, start code-server in the devcontainer and restart vscode as a client of the code-server running in the container. That way, all of the tools for developing the software can be reproducibly installed in a container isolated from the host system.
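
The two styles look roughly like this (a sketch; the image and path names are placeholders, and devcontainer.json accepts // comments):

    // Option 1: reference a pre-built, named local image
    {
      "name": "myproject",
      "image": "myproject-dev:latest"
    }

    // Option 2: let the devcontainers extension build from a Dockerfile
    {
      "name": "myproject",
      "build": { "dockerfile": "../Dockerfile" }
    }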

It looks like it's bootc or bootc-image-builder for building native container images?

bootc-image-builder: https://github.com/osbuild/bootc-image-builder

60. hippos+lj1[view] [source] [discussion] 2025-08-23 05:53:53
>>stock_+ok
I used dinit in Artix Linux. It is lightweight and impressive (https://artixlinux.org/faq.php)
64. cbzbc+Gs1[view] [source] [discussion] 2025-08-23 07:52:36
>>kragen+sO
It does, because SIGTERM is traditionally understood as the trigger for a shutdown. Docker, for instance, will send a SIGTERM to pid 1 when a container is stopped, which goes back to a previous comment here about using a real init as pid 1 if the thing in your container forks: >>44990092
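
As a minimal illustration (a sketch, not taken from any of the projects mentioned): if the thing in your container runs directly as pid 1 and never installs a SIGTERM handler, the signal from `docker stop` is simply dropped, because the kernel applies no default signal actions to pid 1, and Docker falls back to SIGKILL after the grace period. A handler as small as this is enough for a clean shutdown:

    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t shutting_down = 0;

    static void on_term(int sig) { (void)sig; shutting_down = 1; }

    int main(void) {
        signal(SIGTERM, on_term);   /* without this, pid 1 silently ignores SIGTERM */
        while (!shutting_down)
            sleep(1);               /* stand-in for the real work loop */
        /* flush buffers, close connections, etc. */
        return 0;
    }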
69. fbarth+QE1[view] [source] [discussion] 2025-08-23 10:30:09
>>nine_k+0k
There's an appropriately minimal comparison with runit in her slides (PDF) from a talk she gave in 2024: https://leahneukirchen.org/talks/#nitroyetanotherinitsy
73. networ+vL1[view] [source] 2025-08-23 11:58:23
>>todsac+(OP)
GitHub repository with the issue tracker: https://github.com/leahneukirchen/nitro.
76. rendaw+HW1[view] [source] [discussion] 2025-08-23 13:55:03
>>J_McQu+nX
I made a process supervisor, probably less simple than nitro but much simpler (and more focused) than systemd.

Aside from the overreach, I think there are some legitimate issues with systemd:

- It's really hard to make services reliable. There are all sorts of events in systemd which will cause something to turn off and then just stay off.

- It doesn't really help that the things you tell it to do (start/stop this service) use the same memory bits as when some dependency turns something on.

- All the commands have custom, nonstandard output, mostly for human consumption. This makes it really hard to interface with (reliably) if you need to write tooling around systemd. INI files are not standardized, especially systemd's.

- The two-way (requires, requiredby) dependencies make it really hard to get a big picture of the control graph

FWIW here's mine, where I wrote a bit more about the issues: https://github.com/andrewbaxter/puteron/

91. JdeBP+GB4[view] [source] [discussion] 2025-08-24 17:56:36
>>imiric+Nm
These aren't runit doco, but they should help with the concepts.

* https://jdebp.uk/FGA/daemontools-family.html#Logging

* https://jdebp.uk/Softwares/nosh/guide/logging.html

94. bityar+638[view] [source] [discussion] 2025-08-25 21:50:04
>>RulerO+xa1
If all you need is init (and not a process supervisor), docker comes with one called 'tini' built in. All you have to do is supply `--init` to the `docker run` command. Or use `init: true` in your docker-compose.yaml.
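
For example (the image and service names are placeholders):

    docker run --init --rm my-image

    # or, in docker-compose.yaml:
    services:
      app:
        image: my-image
        init: true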

As for a different process supervisor, I'm not sure. I've used supervisord and agree it's kind of awkward. I have heard of these but don't know much about them:

https://smarden.org/runit/

https://github.com/nicolas-van/multirun

https://github.com/just-containers/s6-overlay
