zlacker

[parent] [thread] 6 comments
1. zenmac+(OP)[view] [source] 2026-02-03 19:31:21
>Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud)

The real question is whether the microVMs can run on plain old Linux, self-hosted.

replies(1): >>echelo+O1
2. echelo+O1[view] [source] 2026-02-03 19:40:32
>>zenmac+(OP)
Everyone wants to lock you in.

Unfortunately there's no other way to make money. If you're 100% liberally licensed, you just get copied. AWS/GCP clone your product, offer the same service, and take all the money.

It sucks that there isn't a middle ground. I don't want to have to build castles in another person's sandbox. I'd trust it if they gave me the keys to do the same. I know I don't have time to do that, but I want the peace of mind.

replies(1): >>ushako+T9
3. ushako+T9[view] [source] [discussion] 2026-02-03 20:16:14
>>echelo+O1
we have 100% open-source Sandboxes at E2B

git: https://github.com/e2b-dev/infra

wiki: https://deepwiki.com/e2b-dev/infra

replies(3): >>echelo+tl >>dizhn+XS1 >>codeth+ja6
4. echelo+tl[view] [source] [discussion] 2026-02-03 21:12:46
>>ushako+T9
This is what I like to see!

Not sure what your customers look like, but I for one would also be fine with "fair source" licenses (there are several: fair source, fair code, the Defold license, etc.)

These give customers 100% control but keep Amazon, Google, and other cling-on folks like WP Engine from reselling your work. It avoids the Docker, Elasticsearch, Redis fate.

"OSI" is a submarine from big tech hyperscalers that mostly take. We should have gone full Stallman, but fair source is a push back against big tech.

replies(1): >>ushako+Oo
5. ushako+Oo[view] [source] [discussion] 2026-02-03 21:30:01
>>echelo+tl
we aren’t worried about that.

when we were starting out we figured there was no solution that would satisfy our requirements for running untrusted code. so we had to build our own.

the reason we open-sourced this is that we want everyone to be able to run our Sandboxes - in contrast to the majority of our competitors, whose goal is to lock you in to their offering.

with open-source you have the choice, and luckily Manus, Perplexity, and Nvidia chose us for their workloads.

(opinions my own)

6. dizhn+XS1[view] [source] [discussion] 2026-02-04 08:52:50
>>ushako+T9
This is exactly what I am building for a friend, in a semi-amateur fashion with LLMs. Looking at your codebase, I would probably end up with something very similar in 6 months. You even have an Air toml and use Firecracker, not to mention Go. Great minds think alike, I suppose :D. Mine is not for AI but for running unvetted data science scripts, simple stuff mostly. I am using rootless Podman (I think you are using Docker? Or perhaps Packer, which is a tool I didn't know about until now) to create the microVM images, and the images have no network access. We're creating a .ext4 disk image to bring in the data/script.

I think I might just "take" this if the resource requirements are not too demanding. Thanks for sharing. Do you have docs for deploying on bare metal?
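For what it's worth, the .ext4 data-disk step can be done without root or a loop mount if your e2fsprogs is new enough: `mkfs.ext4 -d` populates a fresh filesystem image from a directory. A minimal sketch (the paths, sizes, and the wrapper function are illustrative, not from the E2B codebase):

```python
import shutil
import subprocess
from pathlib import Path


def build_ext4_image(src_dir: str, image_path: str, size_mb: int) -> list[str]:
    """Return the mkfs.ext4 invocation that creates an ext4 image
    pre-populated with the contents of src_dir.

    Uses the -d flag (e2fsprogs >= 1.43), which copies the directory
    into the new filesystem at creation time - no root, no loop mount.
    mke2fs creates the image file itself when an explicit size is given.
    """
    return [
        "mkfs.ext4",
        "-q",              # quiet: no interactive output
        "-d", src_dir,     # populate the new fs from this directory
        image_path,        # regular file to create
        f"{size_mb}m",     # explicit size in megabytes
    ]


if __name__ == "__main__":
    # Hypothetical payload: one unvetted script to hand to the microVM.
    payload = Path("payload")
    payload.mkdir(exist_ok=True)
    (payload / "run.py").write_text("print('hello from the sandbox')\n")

    cmd = build_ext4_image(str(payload), "payload.ext4", 64)
    if shutil.which("mkfs.ext4"):
        subprocess.run(cmd, check=True)
    else:
        print("mkfs.ext4 not found; would run:", " ".join(cmd))
```

The resulting image can then be attached to the Firecracker VM as a read-only secondary drive, keeping the guest fully isolated from the host filesystem and the network.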

7. codeth+ja6[view] [source] [discussion] 2026-02-05 14:03:59
>>ushako+T9
This looks neat! How difficult would it be to run everything locally?
[go to top]