zlacker

Coding Agent VMs on NixOS with Microvm.nix

submitted by secure+(OP) on 2026-02-01 08:02:46 | 99 points 44 comments

NOTE: showing posts with links only
14. NJL300+vzb 2026-02-04 17:42:17
>>secure+(OP)
A pair of containers felt a bit cheaper than a VM:

https://github.com/5L-Labs/amp_in_a_box

I was going to add Gemini / OpenCode Kilo next.

There is some upfront cost in defining which endpoints to map inside, but it definitely adds a veneer of protection against the crazy…
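To make the pattern concrete (a hypothetical sketch, not the repo's actual setup; the proxy and agent image names are made up), the idea is an internal-only Docker network plus an allowlisting egress proxy, so the agent can only reach the endpoints you explicitly map in:

    # Hypothetical sketch: the agent sits on an internal-only network and all
    # egress goes through an allowlisting proxy container.
    docker network create --internal agent-net
    docker run -d --name egress-proxy \
      -e ALLOWED_HOSTS=api.anthropic.com \
      allowlist-proxy:latest                  # hypothetical proxy image
    docker network connect agent-net egress-proxy
    docker run -it --rm --network agent-net \
      -e HTTPS_PROXY=http://egress-proxy:3128 \
      -v "$PWD:/workspace" -w /workspace \
      agent-sandbox:latest                    # hypothetical agent image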

20. mtlync+xOb 2026-02-04 18:41:49
>>mxs_+5Kb
This is a similar macOS solution:

https://github.com/lynaghk/vibe/

21. aquari+COb 2026-02-04 18:42:03
>>Cyph0n+znb
To provide another data point: I too use NixOS, and oh boy, that one-time setup cost really is steep. And while we're sharing Nix stuff for LLMs, there's this piece of kit too: https://github.com/YPares/rigup.nix
22. alexze+MOb 2026-02-04 18:42:44
>>clawsy+o2
This is a big reason for our strategy at Edera (https://edera.dev) of building hypervisor technology that eliminates the standard x86/ARM kernel overhead in favor of deep para-virtualization.

The performance of gVisor is often a big limiting factor in deployment.

24. alexze+NQb 2026-02-04 18:52:00
>>clawsy+xh1
The reason virtualization approaches with true Linux kernels are still important is that whatever you do allow via syscalls ultimately results in a syscall on the host system, even if through layers of indirection. If you fork() in gVisor, that ultimately calls fork() on the host (btw, fork()/execve() is still expensive on gVisor).
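A quick way to see that cost yourself (assuming Docker has both the standard runc runtime and gVisor's runsc runtime configured) is to time the same fork/exec-heavy loop under each:

    # Rough comparison sketch: fork()/execve()-heavy work is where gVisor's
    # syscall interception hurts most, so expect the runsc run to be slower.
    docker run --rm --runtime=runc  alpine sh -c 'time sh -c "for i in \$(seq 1000); do /bin/true; done"'
    docker run --rm --runtime=runsc alpine sh -c 'time sh -c "for i in \$(seq 1000); do /bin/true; done"'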

The middle ground we've built is that a real Linux kernel interfaces with your application inside the VM (we call it a zone), but that kernel can then make specialized, specific interface calls to the host system.

For example, with NVIDIA on gVisor the ioctl()s are passed through directly, so an NVIDIA driver vulnerability that can cause memory corruption leads directly to corruption in the host kernel. With our platform at Edera (https://edera.dev), the NVIDIA driver runs in the VM itself, so a memory corruption bug doesn't percolate to other systems.
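For reference, the gVisor side of that passthrough looks roughly like the following; the nvproxy flag name is from memory of the runsc docs, so treat the exact setup as an assumption and check the gVisor GPU documentation:

    # Sketch: register runsc as a Docker runtime with NVIDIA ioctl()
    # passthrough (nvproxy) enabled, then request GPUs as usual.
    # /etc/docker/daemon.json (fragment):
    #   "runtimes": {
    #     "runsc": { "path": "/usr/local/bin/runsc", "runtimeArgs": ["--nvproxy=true"] }
    #   }
    docker run --rm --runtime=runsc --gpus=all \
      nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi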

25. alexze+9Rb 2026-02-04 18:53:40
>>souvik+xKb
This is the thesis of our research paper: a good middle ground is necessary. https://arxiv.org/abs/2501.04580
35. Cyph0n+Rjc 2026-02-04 21:06:41
>>0x457+OPb
It’s fairly trivial to map your NixOS config into a VM image: https://nixos.org/manual/nixos/stable/#sec-image-nixos-rebui...
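As a minimal sketch of that manual section (assuming your nixpkgs is recent enough for the build-image subcommand and offers a qcow variant), building a disk image from the same flake config looks like:

    # Build a qcow2 disk image from an existing flake config; the available
    # --image-variant values depend on your nixpkgs version.
    nixos-rebuild build-image --image-variant qcow --flake .#myhost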

An alternative is to “infect” a VM running in whatever cloud and convert it into a NixOS VM in-place: https://github.com/nix-community/nixos-anywhere

In fact, it is a common practice to use the latter to install NixOS on new machines. You start off by booting into a live USB with SSH enabled, then use nixos-anywhere to install NixOS and partition disks via disko. Here is an example I used recently to provision a new gaming desktop:

    nix run github:nix-community/nixos-anywhere -- \
      --flake .#myhost \
      --target-host user@192.168.0.100 \
      --generate-hardware-config nixos-generate-config ./hosts/myhost/hardware-configuration.nix
At the end of this invocation, you end up with a NixOS machine running your config, with disks partitioned according to your disko config. My disko config in this case (a ZFS pool with a single-disk vdev): https://gist.github.com/aksiksi/7fed39f17037e9ae82c043457ed2...
38. indigo+R1d 2026-02-05 01:37:05
>>phroto+gOb
I like using LXC containers, i.e. a full persistent OS, and you can run Docker inside if you want, etc. I started this, and it works well for me to put on a server or VPS:

https://github.com/jgbrwn/vibebin
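A minimal sketch of the underlying LXC pattern (using the LXD/Incus CLI; vibebin itself does more than this), with nesting enabled so Docker can run inside the container:

    # Persistent Ubuntu system container; security.nesting lets Docker
    # run inside it.
    lxc launch ubuntu:24.04 agent-box -c security.nesting=true
    lxc exec agent-box -- bash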

39. indigo+32d 2026-02-05 01:38:49
>>dist-e+pib
I started this with the same idea:

https://github.com/jgbrwn/vibebin

44. schmuh+frd 2026-02-05 05:37:21
>>rictic+Dib
Or try this: https://github.com/deepclause/agentvm. It's based on container2wasm, so the VM is fully defined by a Dockerfile.
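Roughly, the container2wasm flow it builds on looks like this (command names from the c2w README as I recall them, so double-check against the repos):

    # Build the environment from a Dockerfile, convert the image into a
    # Wasm-based VM with container2wasm, then run it on a Wasm runtime.
    docker build -t agent-env .
    c2w agent-env agent-env.wasm
    wasmtime agent-env.wasm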