zlacker

[return to "Monty: A minimal, secure Python interpreter written in Rust for use by AI"]
1. the_ha+xo1 2026-02-07 13:18:14
>>dmpetr+(OP)
the papercut argument jstanley made is valid but there's a flip side - when you're running AI-generated code at scale, every capability you give it is also a capability that malicious prompts can exploit. the real question isn't whether restrictions slow down the model (they do), it's whether the alternative - full CPython with file I/O, network access, subprocess - is something you can safely give to code written by a language model that someone else is prompting.
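to make the "full CPython is hard to give away safely" point concrete: even if you strip the builtins before handing code to `eval`, plain attribute traversal walks back up to `object` and from there to every class loaded in the process. a quick sketch in stock CPython (nothing here is Monty's API, just an illustration of the well-known escape):

```python
# Classic restricted-builtins escape in CPython: even with an empty
# __builtins__ dict, attribute access on a literal recovers the full
# class hierarchy of the host process.
payload = "().__class__.__base__.__subclasses__()"

# eval() with no builtins available at all...
result = eval(payload, {"__builtins__": {}})

# ...still returns every subclass of `object` in the interpreter,
# which typically includes classes that reach os/subprocess.
print(len(result))
```

this is why "a separate minimal interpreter" is a defensible design rather than trying to subset CPython in-process.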

that said, the class restriction feels weird. classes aren't the security boundary. file access, network, imports - that's where the risk is. restricting classes just forces the model to write uglier code for no security gain. would be curious if the restrictions map to an actual threat model or if it's more of a "start minimal and add features" approach.
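for anyone actually blocked by a missing `class` statement, the standard workaround is closures over mutable state, ugly but expressible in a very small Python subset. a hypothetical example (the `make_counter` name is mine, not from Monty):

```python
# Closure-based "object" without a class statement: methods are
# plain functions closing over a shared state dict.
def make_counter(start=0):
    state = {"n": start}

    def increment():
        state["n"] += 1
        return state["n"]

    def value():
        return state["n"]

    # expose the "methods" as a plain dict instead of attributes
    return {"increment": increment, "value": value}

c = make_counter()
c["increment"]()
c["increment"]()
print(c["value"]())  # 2
```

which is exactly the "uglier code for no security gain" trade-off: nothing about this is safer than a class, it's just more awkward for the model to write.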

2. zahlma+rF1 2026-02-07 15:36:20
>>the_ha+xo1
My understanding is that "the class restriction" isn't trying to implement any kind of security boundary — they just haven't managed to implement support yet.