zlacker

1. vghais+(OP) 2026-02-07 09:34:16
This is very cool, but I'm having some trouble understanding the use cases.

Is this mostly for "code mode", where MCP calls go through a Monty function call instead? Is it for doing some quick maths or pre/post-processing to answer queries? Or maybe to implement CaMeL?

It feels like the power of terminal agents comes partly from having access to the network/filesystem, so sandboxed containers seem like the natural extension?
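
To make the code-mode idea concrete, the pattern I have in mind is roughly the sketch below: the LLM writes one snippet that composes several tool calls, and the host runs it with only whitelisted functions in scope. Plain exec() here is just to show the shape and is not a sandbox; doing that step safely is presumably the part Monty handles, and none of this is Monty's actual API.

    # "Code mode" sketch: the LLM emits one Python snippet instead of several MCP
    # round-trips, and the host executes it with only whitelisted tool functions visible.
    # exec() is NOT isolation; a restricted interpreter (Monty) or a container would be.
    import textwrap

    def search_docs(query: str) -> list[str]:
        # Stand-in for a tool that would otherwise be a separate MCP call per invocation.
        return [f"doc about {query} #{i}" for i in range(3)]

    def word_count(text: str) -> int:
        return len(text.split())

    llm_generated_code = """
    hits = search_docs("retry policy")
    total = sum(word_count(h) for h in hits)
    result = f"{len(hits)} hits, {total} words"
    """

    allowed = {"search_docs": search_docs, "word_count": word_count}
    namespace = {"__builtins__": {"sum": sum, "len": len}, **allowed}
    exec(textwrap.dedent(llm_generated_code), namespace)  # illustration only, not safe isolation
    print(namespace["result"])                            # -> "3 hits, 15 words"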

replies(1): >>16bitv+y2
2. 16bitv+y2 2026-02-07 10:17:39
>>vghais+(OP)
It's right there in the README.

> Monty avoids the cost, latency, complexity and general faff of using a full container-based sandbox for running LLM-generated code.

> Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single-digit microseconds, not hundreds of milliseconds.
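
For a sense of the scale the README is pointing at, here's a rough local comparison, with a fresh CPython process standing in for per-call sandbox startup (real container startup is slower still, and exact numbers are machine-dependent):

    # Rough illustration of the startup-latency gap: spawn a fresh Python process per
    # snippet (a stand-in for per-call sandbox/container startup) vs. running the same
    # code in-process.
    import subprocess
    import sys
    import time

    snippet = "x = sum(range(1000))"

    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", snippet], check=True)
    print(f"fresh interpreter process: {(time.perf_counter() - t0) * 1e3:.1f} ms")

    t0 = time.perf_counter()
    exec(snippet, {})
    print(f"in-process execution:      {(time.perf_counter() - t0) * 1e6:.1f} µs")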

replies(1): >>vghais+3v
3. vghais+3v 2026-02-07 15:11:17
>>16bitv+y2
Oh, I did read the README, but the question still stands: it saves on cost, latency and complexity, but the trade-off is that the agent can't run whatever it wants in a sandbox, which also makes it less capable.