zlacker

[return to "Monty: A minimal, secure Python interpreter written in Rust for use by AI"]
1. simonw+fj 2026-02-06 23:13:00
>>dmpetr+(OP)
I got a WebAssembly build of this working and fired up a web playground for trying it out: https://simonw.github.io/research/monty-wasm-pyodide/demo.ht...

It doesn't have class support yet!

But it doesn't matter, because LLMs that try to use a class will get an error message and simply rewrite their code without classes.
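
Roughly the kind of rewrite I mean, as an illustrative snippet I wrote for this comment (not output from the playground):

    # First attempt (fails in Monty, since classes aren't supported yet):
    class Counter:
        def __init__(self):
            self.n = 0

    # What the model falls back to after seeing the error:
    def make_counter():
        return {"n": 0}

    counter = make_counter()
    counter["n"] += 1
    print(counter["n"])  # 1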

Notes on how I got the WASM build working here: https://simonwillison.net/2026/Feb/6/pydantic-monty/

2. vghais+i71 2026-02-07 09:34:16
>>simonw+fj
This is very cool, but I'm having some trouble understanding the use cases.

Is this mostly just for codemode, where the MCP calls go through a Monty function call instead? Is it for doing quick maths or some pre/post-processing to answer queries? Or maybe to implement CaMeL?
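
To make the codemode guess concrete, I'm picturing something like this, where every name is made up for the sake of the sketch:

    # Hypothetical codemode sketch: the host exposes each MCP tool as a
    # plain Python function inside the interpreter, and the model writes
    # one script instead of issuing a separate tool call per step.
    def search_docs(query):
        # would proxy to an MCP server in real life; stubbed here so the
        # sketch runs standalone
        return ["doc about " + query]

    def summarize(texts):
        return " / ".join(texts)

    hits = search_docs("retry policies")
    print(summarize(hits))  # doc about retry policies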

It feels like the power of terminal agents comes partly from their access to the network/filesystem, so sandboxed containers feel like the more natural extension?

3. 16bitv+Q91 2026-02-07 10:17:39
>>vghais+i71
It's right there in the README.

> Monty avoids the cost, latency, complexity and general faff of using a full container-based sandbox for running LLM-generated code.

> Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single-digit microseconds, not hundreds of milliseconds.
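
i.e. the pitch is that the interpreter runs in-process, with no container at all. A sketch of the shape of it, where the module and method names are my invention rather than Monty's real API:

    # Hypothetical embedding sketch; "monty_sandbox" and its methods are
    # invented for illustration, not Monty's real API:
    from monty_sandbox import Interpreter

    interp = Interpreter()  # in-process startup, no container to boot
    result = interp.run("sum(i * i for i in range(10))")
    print(result)  # 285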

4. vghais+lC1 2026-02-07 15:11:17
>>16bitv+Q91
Oh I did read the README, but I still have the question: while it does save on cost, latency, and complexity, the tradeoff is that agents can't run whatever they want the way they could in a sandbox, which makes them less capable too.