It's a little HTTP proxy that your application can route requests through, and the proxy is what handles adding the API keys or whatnot to the request to the service, rather than your application, something like this for example:
Application -> tokenizer -> Stripe
The secrets for the third party service should in theory then be safe should there be some leak or compromise of the application since it doesn't know the actual secrets itself.
Cool idea!
await using sandbox = await Sandbox.create({
secrets: {
OPENAI_API_KEY: {
hosts: ["api.openai.com"],
value: process.env.OPENAI_API_KEY,
},
},
});
await sandbox.sh`echo $OPENAI_API_KEY`;
// DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19...
It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.Kind of like how XSS attacks can't read httpOnly cookies but they can generally still cause fetch() requests that can take actions using those cookies.
from deno_sandbox import DenoDeploy
sdk = DenoDeploy()
with sdk.sandbox.create() as sb:
# Run a shell command
process = sb.spawn("echo", args=["Hello from the sandbox!"])
process.wait()
# Write and read files
sb.fs.write_text_file("/tmp/example.txt", "Hello, World!")
content = sb.fs.read_text_file("/tmp/example.txt")
print(content)
Looks like the API protocol itself uses websockets: https://tools.simonwillison.net/zip-wheel-explorer?package=d...Will give these a try. These are exciting times, it's never been a better time to build side projects :)
Same idea with more languages on OCI. I believe they have something even better in the works, that bundles a bunch of things you want in an "env" and lets you pass that around as a single "pointer"
I use this here, which eventually becomes the sandbox my agent operates in: https://github.com/hofstadter-io/hof/blob/_next/.veg/contain...
[0] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
A quick search this popped up:
If we can spin up microVM so quickly, why bother with Docker or other containers at all?
Those limitations from other tools was exactly why I made https://github.com/danthegoodman1/netfence for our agents
It uses web workers on a web browser. So is this Deno Sandbox like that, but for server? I think Node has worker threads.
It's a sandbox that uses envoy as a transparent proxy locally, and then an external authz server that can swap the creds.
The idea is extended further in that the goal is to allow an org to basically create their own authz system for arbitrary upstreams, and then for users to leverage macaroons to attentuate the tokens at runtime.
It isn't finished but I'm trying to make it work with ssh/yubikeys as an identity layer. The authz macaroon can have a "hole" that is filled by the user/device attestation.
The sandbox has some nice features like browser forwarding for Claude oauth and a CDP proxy for working with Chrome/Electron (I'm building an Obsidian plugin).
I'm inspired by a lot of the fly.io stuff in tokenizer and sprites. Exciting times.
Hit a snag: Sprites appear network-isolated from Fly's 6PN private mesh (fdf:: prefix inside the Sprite, not fdaa::; no .internal DNS). So a Tokenizer on a Fly Machine isn't directly reachable without public internet.
Asked on the Fly forum: https://community.fly.io/t/can-sprites-reach-internal-fly-se...
@tptacek's point upthread about controlling not just hosts but request structure is well taken - for AI agent sandboxing you'd want tight scoping on what the proxy will forward.