
[return to "Sandboxing AI Agents in Linux"]
1. aflag+iG 2026-02-03 20:24:10
>>speckx+(OP)
I don't know if I want to create an ad-hoc list of permissions. What I'd like is something more like: take a snapshot of my current workspace in a VM, run claude there, and let it go wild. At the end of the session, kill the box. The only downside is potentially syncing the claude sessions/projects, but I don't think that'd be too difficult.
2. senko+WI 2026-02-03 20:37:43
>>aflag+iG
> take a snapshot of my current workspace in a VM. Run claude there

Sounds like docker + overlayfs might fit the bill, as long as there's a base image that is close enough to what you need.

I don't think there should be One True Way to run these; everyone can set it up in whatever way best fits their workflow.
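
Roughly, something like this works (untested sketch driving the Docker CLI from Python; the claude-base image, the /work path, and the in-container session directory are guesses). Every write the agent makes lands in an overlayfs layer on top of the image, so removing the container is the "kill the box" step, and only the session data gets copied back out:

    #!/usr/bin/env python3
    """Throwaway agent run: copy the workspace into a fresh container,
    let the agent loose, pull only the session data back out, discard the rest."""
    import pathlib
    import subprocess

    WORKSPACE = pathlib.Path.cwd()
    IMAGE = "claude-base"      # assumption: a base image you built with the agent CLI installed
    NAME = "claude-scratch"

    # Idle container; all writes go into a copy-on-write overlayfs layer
    # that disappears when the container is removed.
    subprocess.run(["docker", "run", "-d", "--name", NAME, IMAGE, "sleep", "infinity"], check=True)
    try:
        # Copy (don't bind-mount) the workspace in, so the agent works on a snapshot.
        subprocess.run(["docker", "cp", str(WORKSPACE), f"{NAME}:/work"], check=True)
        # Interactive agent session inside the snapshot.
        subprocess.run(["docker", "exec", "-it", "-w", "/work", NAME, "claude"], check=True)
        # Sync only the sessions/projects back out before killing the box.
        subprocess.run(["docker", "cp", f"{NAME}:/root/.claude",
                        str(WORKSPACE / ".claude-session")], check=True)
    finally:
        subprocess.run(["docker", "rm", "-f", NAME], check=True)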

3. ushako+JL 2026-02-03 20:51:28
>>senko+WI
Neither Docker nor bubblewrap is a secure sandbox. The only way to get actually isolated sandboxes is to use VMs.

Disclaimer: I work on secure sandboxes at E2B.

4. its-su+BU 2026-02-03 21:36:53
>>ushako+JL
Do you have more information on how to set up such VMs?
5. ushako+DW 2026-02-03 21:46:37
>>its-su+BU
For personal use there are many ways: Vagrant, Docker Sandbox, NixOS VMs, Lima, OrbStack.

If you want multi-tenant: E2B (open-source, self-hosted).
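
For the Lima route, a rough sketch (the instance name, the host workspace path, and having the agent CLI preinstalled in the guest image are all assumptions). Lima mounts the host home directory into the guest read-only by default, so the workspace gets copied to a writable path inside the VM before the agent runs, and deleting the instance throws the whole box away:

    #!/usr/bin/env python3
    """Disposable Lima VM: start it, run the agent on a copy of the
    workspace inside the guest, then delete the instance."""
    import subprocess

    INSTANCE = "agent-sandbox"        # assumption: a throwaway instance name
    HOST_WS = "/home/me/myproject"    # assumption: your workspace path on the host

    subprocess.run(["limactl", "start", "--name", INSTANCE, "template://default"], check=True)
    try:
        # Host home is mounted read-only at the same path inside the guest,
        # so copy the snapshot somewhere writable and run the agent there.
        subprocess.run(["limactl", "shell", INSTANCE, "sh", "-c",
                        f"cp -r {HOST_WS} /tmp/work && cd /tmp/work && claude"], check=True)
    finally:
        # Deleting the instance discards everything the agent did.
        subprocess.run(["limactl", "delete", "--force", INSTANCE], check=True)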

6. eikenb+Ai1 2026-02-03 23:45:32
>>ushako+DW
HashiCorp has mostly abandoned Vagrant, so I'd avoid it.