My reasoning is: 1) AIs can comprehend specs easily, especially if simple; 2) it is only valuable to "meet developers where they are" if you really need the developers' history/experience, which I'd argue LLMs don't need as much (or only need because the lang is so flexible/loose); and 3) human languages were developed to accommodate extreme human subjectivity, which is way too much wiggle room/flexibility (and is why people have to keep writing projects like these to reduce it).
We should be writing languages that are super-strict by default (e.g. down to the literal ordering/alphabetizing of constructs, exact spacing expectations), with only opt-in loose modes for humans and tooling to format. I admit I am toying with such a lang myself, but in general we can ask more of AI code generation than we can of ourselves.
Perhaps if the interpreter is in turn embedded in the executable and runs in-process, but even a do-nothing `uv` invocation takes ~10ms on my system.
I like the idea of a minimal implementation like this, though. I hadn't even considered it from an AI sandboxing perspective; I just liked the idea of a stdlib-less alternative upon which better-thought-out "core" libraries could be stacked, with less disk footprint.
Have to say I didn't expect it to come out of Pydantic.
It doesn't have class support yet!
But it doesn't matter, because LLMs that try to use a class will get an error message and rewrite their code to avoid classes.
Notes on how I got the WASM build working here: https://simonwillison.net/2026/Feb/6/pydantic-monty/
And now for something completely different.
Everyone was using git for reasons that, to me, seemed bandwagon-y, when Mercurial just had such a better UX and mental model.
Now, everyone is writing agent `exec`s in Python, when I think TypeScript/JS is far better suited for the job (it was always fast + secure, not to mention more reliable and information-dense b/c of typing).
But I think I'm gonna lose this one too.
Why one would drag this godforsaken abomination onto the server side is beyond me.
Even effing C# nowadays can be run in a script-like manner from a single file.
—
Even the latest Codex UI app is Electron. The one that is supposed to write itself with AI wonders, yet it couldn't manage native SwiftUI, WinUI, and Qt or whatever is on Linux these days.
LLMs are really good at writing Python for data processing. I suspect it's due to Python having a really good ecosystem around this niche.
And the type safety/security issues can hopefully be mitigated by ty and pyodide (already used by cf’s python workers)
TypeScript's types are far more adaptable and malleable, even compared with the latest C# 15, which is belatedly adding sum types. If I set TypeScript to its most strict settings, I can even make it mimic a poor man's Haskell and write existential types or monoids.
And JS/TS have by far the best libraries and utilities for JSON and XML parsing and string manipulation this side of Perl (the difference being that the TypeScript version is actually readable), and maybe Nushell, but I've never used Nushell in production.
Recently I wrote a Linux CLI tool for managing Podman/Quadlet containers, I wrote it in TypeScript, and it was a joy to use. The Effect library gave me proper error types and immutable data types, and the Bun Shell makes writing shell commands in TS nearly as easy as Bash. And I got it to compile to a single self-contained binary which I can run on any server, with a lower memory footprint and faster startup time than any equivalent .NET code I've ever written.
And yes, had I written it in Rust it would have been faster and probably even safer, but for a quick and dirty tool, development speed matters, and I can tell you that I really appreciated not having to think about ownership and fight the borrow checker the whole time.
TypeScript might not be perfect, but it is a surprisingly good language for many domains and is still undervalued IMO given what it provides.
Yep, still using good old hg for personal repos - interop for outside projects defaults to git, since almost all the hg hosts withered.
Of course it's slow for complex numerical calculations, but that's the primary use case.
I think the consensus is that LLMs are very good at writing python and ts/js, generally not quite as good at writing other languages, at least in one shot. So there's an advantage to using python/js/ts.
But I'd be interested to see what you come up with.
https://en.wikipedia.org/wiki/List_of_Python_software#Python...
For example, incorrect levels of indentation. Let me use dots instead of spaces because of HN formatting:
for key,val in mydict.items():
..if key == "operation":
....logging.info("Executing operation %s",val)
..if val == "drop_table":
....self.drop_table()
This uses valid syntax, and the logging module is not in the (Monty-supported) stdlib, so I assume it would ignore it or replace it with dummy code? That shouldn't prevent it from analyzing that loop and determining that the second if-block was intended to be nested under the first; the way it is written now, the drop_table branch runs without the key check.
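For clarity, here's the same loop with the nesting as (presumably) intended, spaces restored:

    for key, val in mydict.items():
        if key == "operation":
            logging.info("Executing operation %s", val)
            # this check was meant to be guarded by the key check above
            if val == "drop_table":
                self.drop_table()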
In other words, if you don't want to validate proper stdlib/module usage, but proper __Python__ usage, this makes sense. Although I'm speculating on exactly what they're trying to do.
EDIT: I think my speculation was wrong; it looks like they might have developed this to write code for pydantic-ai: https://github.com/pydantic/pydantic-ai . I'll leave the comment above as-is though, since I think it would still be cool to have that capability in pydantic.
I'm especially curious about where the Pydantic team wants to take Monty. The minimal-interpreter approach feels like a good starting point for AI workloads, but the long tail of Python semantics is brutal. There is a trade-off between keeping the surface area small (for security and predictability) and providing sufficient language capabilities to handle the non-trivial snippets that LLMs generate for complex tasks.
Just beware of panics!
Disclaimer: I work at E2B, opinions my own.
Monty's overhead is so low that, assuming we get the security/capabilities tradeoff right (Samuel can comment on this more), you could have it always enabled on your agents with basically no downsides. That can't be said for many other code-execution sandboxes, which are often overkill for the code-mode use case anyway.
For those not familiar with the concept, the idea is that in “traditional” LLM tool calling, the entire (MCP) tool result is sent back to the LLM, even if it just needs a few fields, or is going to pass the return value into another tool without needing to see (all of) the intermediate value. Every step that depends on results from an earlier step requires a new LLM turn, limiting parallelism and adding a lot of overhead, expensive token usage, and context window bloat.
With code mode, the LLM can chain tool calls, pull out specific fields, and run entire algorithms using tools with only the necessary parts of the result (or errors) going back to the LLM.
These posts by Cloudflare: https://blog.cloudflare.com/code-mode/ and Anthropic: https://platform.claude.com/docs/en/agents-and-tools/tool-us... explain the concept and its advantages in more detail.
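To make this concrete, here is a hypothetical sketch of the kind of snippet a model might emit in code mode. The tool names (`search_orders`, `refund`) are made up; the host decides which functions actually exist:

    # LLM-written snippet: chain two hypothetical host-provided tools.
    # Intermediate results stay inside the sandbox and never reach the
    # model's context window.
    orders = search_orders(customer_id="c_123", status="failed")
    refunded = []
    for order in orders:
        result = refund(order_id=order["id"])
        refunded.append(result["id"])
    # Only this small summary goes back to the LLM.
    summary = {"refunded_count": len(refunded), "ids": refunded}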
I think in the near term we'll add support for classes, dataclasses, datetime, json. I think that should be enough for many use cases.
everything that you don’t want your agent to access should live outside of the sandbox.
But to be clear, we're not even targeting the same "computer use" use case that I think e2b, daytona, cloudflare, modal, fly.io, deno, google, aws are going after - we're aiming to support programmatic tool calling with minimal latency and complexity. It's a fundamentally different offering.
Chill, e2b has its use case, at least for now.
(Genuine question, I've been trying to find reliable, well documented, robust patterns for doing this for years! I need it across macOS and Linux and ideally Windows too. Preferably without having to run anything as root.)
In hindsight, it's pretty funny and obvious
although you’d still need another boundary to run your app in to prevent breaking out to other tenants.
Will explore this for https://toolkami.com/, which allows plug and play advanced “code mode” for AI agents.
Yes, I was also thinking... why MCP then?
But even my simple class project reveals this. You actually do want a simple tool wrapper layer (abstraction) over every API. It doesn't even need to be an API. It can be a calculator that doesn't reach out anywhere.
as the article puts it: "MCP makes tools uniform"
I do like Typescript (not JS) better, because of its highly advanced type system, compared to Python's.
TS/JS is not inherently fast, it just has a good JIT compiler; Python still ships without one. Regarding security, each interpreter is about as permissive as the other, and both can be sealed off from the environment pretty securely.
https://github.com/microsoft/litebox might somehow allow it too if a tool can be built on top of it, but there is no documentation.
Or is all Rust code unquestionably secure?
While I think all LLMs are shit, they probably eventually will not be shit, and it will be because people like you contributed to their progress. Nothing good will come of it for you or your peers. The billionaires who own everything will kick you to the curb as soon as you train your replacement that doesn't sleep, eat or complain. Have some class solidarity.
I trust Firecracker more because it was built by AWS specifically to sandbox Lambdas, but it doesn't work on macOS and is pretty fiddly to run on Linux.
https://danwalsh.livejournal.com/28545.html
One might have different profiles with different permissions. A network service usually wouldn't need your home directory, while a personal utility might not need networking.
Also, that concept could be mixed with subprocess-style sandboxing. The two processes, main and sandboxed, might have different policies. The sandboxed one can only talk to main process over a specific channel. Nothing else. People usually also meter their CPU, RAM, etc.
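A minimal sketch of that two-process pattern in Python. The worker script is hypothetical, and a real setup would add rlimits/cgroups for the CPU and RAM metering mentioned above:

    import json
    import subprocess

    # Parent process: spawn the sandboxed worker; this pipe is the only
    # channel the child gets, nothing else.
    proc = subprocess.Popen(
        ["python3", "worker.py"],   # hypothetical sandboxed worker
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    # One request over the channel, one reply back.
    request = json.dumps({"op": "add", "args": [1, 2]})
    reply, _ = proc.communicate(request, timeout=5)
    print(json.loads(reply))        # e.g. {"result": 3}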
INTEGRITY RTOS had language-specific runtimes, esp Ada and Java, that ran directly on the microkernel. A POSIX app or Linux VM could run side by side with it. Then, some middleware for inter-process communication let them talk to each other.
Tokenization joke?
Pretty much all modern software tooling, with the parts that aim to appeal to humans removed, becomes a much more reliable tool. But it's not clear whether the performance will be better or not.
The invention of the digital calculator turned human calculators into accountants, and that's great! We're contributing to the same process now
I don't know how to stop people from doing this without shaming them. I think more shaming might be required, as uncomfortable as that may be. It's a society-wide prisoner's dilemma ("well, if I don't build it, someone else will"), except this isn't a prisoner's dilemma and we can coordinate, sort of.
It would be one thing if GPUs and tokens were cheap and everyone could take these implementations and out-compete the corporations, but those aren't the game-theoretic terms we're on here. They have the resources, and I promise they are not going to let the average joe be able to afford to out-compete them. They are the ones that are going to be able to get the most advantage from these tools. Why give them the extra leverage? It will be used to displace you. The ruling class, or those with the resources, have zero intention of letting the tide lift all boats. And if there are any in the ruling class who do have good intentions, they will be rooted out.
We see this evidence all across literature, history, and in their own actions. This year in Telluride, Colorado, the ski patrol union went on strike over wages. The billionaire owner, Chuck Horning, who lives in California, did not want to concede to the ski patrollers over $66k spread out over 3 years, like 22k a year over the contract length. He shut down the ski resort during the Christmas holidays and brought the town to its knees. This is just one example, but there are many. It is ideological to these people; it's about maintaining their control over the working class. We are at the beginning of a class struggle that Earth has never witnessed before, with way more lives at stake.
I do not think LLMs are going to lead to superintelligence, btw. I do believe they will get decent enough to uproot many lives when used as a weapon against the value of labor and to accelerate the concentration of resources into the few(er). We are up against people like Chuck Horning, who'd rather destroy an entire town of workers over 22k a year than concede any power. They have zero interest in building an equitable society, or we wouldn't see this type of behavior. This will 100% get used to replace you, and then what will they do with us? They aren't going to just let everyone chill, I promise you that.
I believe the devaluation (and surveillance) of labor because of LLMs and robotics (machine learning in general) is the most pressing issue of our time.
I get the draw of building cool tools with these things, but please don't do it in the open. Let someone else do it, and then we can call them out too. The slower these developments happen, the better.
Open source has been responsible for enormous productivity boosts in our industry, because we don't all have to build duplicates of exactly the same thing time and time again.
But think of all the jobs that were lost by people who would otherwise have been employed building the 500th version of a CSS design system, or a template engine, or code to handle website logins!
What makes AI tools different? (And I actually do agree that they feel different, but I'm interested in hearing arguments stronger than "it feels different".)
Corporations and billionaires will get TI-Nspires; we get TI-83s.
I do not agree that inference will get more affordable in time to prevent harm. It will cause way more problems with the devaluation of labor before it starts to solve those problems, and in that period they will solidify their control over society.
We already see it in how ML is being used on a vast scale to build advanced surveillance infrastructure. Let's not build the advanced calculators for them for free in open source, please; they'd like nothing better. I wrote a lot more in the comments above also.
If anyone has time, this is required reading imho: https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
To put it gently, yes it feels different: for people who haven't already saved a lifetime of SWE wages, this is the first credible threat to the sector in which they're employed since the dot com bubble. People need to work to eat.
And the Python VM had/has its sandboxing features too: previously rexec, and still https://github.com/zopefoundation/RestrictedPython - in the same category, I'd argue.
Then there's of course hypervisor based virtualization and the vulnerabilities and VM escapes there.
Browsers use belt-and-suspenders approaches of employing both language runtime VMs and hardware memory protection as layers to some effect, but still are the star act at pwn2own etc.
It's all layers of porous defenses. There'd definitely be room in the world for performant dynamic language implementations with provably secure foundations.
I think skills and other things have shown that a good bit of learning can be done on-demand, assuming good programming fundamentals and no surprise behavior. But agreed, having a large corpus at training time is important.
I have seen that, given a solid spec for a never-before-seen lang, modern models can do a great job of writing code in it. I've done no research on their ability to leverage a large stdlib/ecosystem this way, though.
> But I'd be interested to see what you come up with.
Under active dev at https://github.com/cretz/duralade, super POC level atm (work continues in a branch)
You cannot compare any open source software, even as a whole, to the impact that LLMs have had on labor and are projected to have. However, I might now argue it would have been better not to have so much open source, as it's clearly being processed through these plagiarism-laundering training regimes.
I don't really think LLMs, robotics and ML in general are going to increase GDP globally; they will instead just replace the inputs that were maintaining the status quo (the workers). If they can't successfully replace human labor, they will at minimum greatly reduce its value, which is extremely dangerous.
Jobs grew greatly during the last 30 years of open source development, but over the last 16 months we've had 350-400k SWE layoffs in the USA. Many of these layoffs have been directly attributed to AI-enhanced productivity. 25% of recent college graduates are unemployed. Jobs data is super unreliable at the moment, but we will also see large swaths of the lower-skilled sectors, customer service for example, see huge layoffs in the coming 24 months.
Despite what C-suites say about AI giving them more free time for their hobbies or whatever, they've yet to answer how people are going to afford those hobbies. Working as a barista, lol? These same mouthpieces will say that LLMs are going to allow the same number of engineers to get 10x more done, but they're not reflecting that in their business decisions. They are laying people off in swaths when equities are at all-time highs; it's abnormal.
I think it's more likely the ruling classes will give us something to do by making us so poor that young men will beg to go fight wars. Put us to use on behalf of their conquest for more resources; that certainly did the trick in the 20s, 30s and 40s :/
Their security model, as explained in the README, lies in not including the standard library and limiting all access to the environment to functions you write and control. Does that make it secure? I'll leave it to you to evaluate that in the context of your use case/threat model.
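The shape of that model is roughly the following. This is a hypothetical sketch, not Monty's actual API, and the exec() stand-in is emphatically not a real sandbox:

    # Hypothetical illustration of the "no stdlib, host controls all I/O" idea.
    # The untrusted code can only reach the world through functions the host
    # explicitly registers; nothing else exists for it.
    ALLOWED = {
        "fetch_user": lambda uid: {"id": uid, "name": "Ada"},  # stub, no real I/O
    }

    def run_untrusted(code: str) -> dict:
        # A real embedded interpreter (like Monty) would evaluate `code`
        # against ALLOWED only; exec() with stripped builtins merely
        # illustrates the interface and is NOT secure on its own.
        env = dict(ALLOWED)
        exec(code, {"__builtins__": {}}, env)
        return env

    result = run_untrusted('user = fetch_user("u1")')["user"]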
It would appear to me that they used Rust primarily because a.) they want to deliver very fast startup times and b.) they want it to be accessible from a variety of host languages (like Python and JavaScript). Those are things Rust does well, though not to the exclusion of C or other GC-free compiled languages. They certainly do not claim that Rust is pixie dust you sprinkle on a project to make it secure. That would clearly be cargo culting.
I find this language war tiring. Don't you? Let's make 2026 the year we all agree to build cool stuff in whatever language we want without this pointless quarreling. (I've personally been saying this for three years at this point.)
These inequalities already exist
Think of it as a language for their use case with Python's syntax and not a Python implementation. I don't know if it's a good idea or not, I'm just an intrigued onlooker, but I think lifting a familiar syntax is a legitimate strategy for writing DSLs.
If the agent can only use the Python interpreter you choose then you could just sandbox regular Python, assuming you trust the agent. But I don't trust any of them because they've probably been vibe coded, so I'll continue to just sandbox the agent using bubblewrap.
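For anyone curious, a bubblewrap invocation for this can stay quite small. A rough sketch, launched from Python here; the flags are real bwrap options, but the binds and the script path are illustrative and distro-dependent:

    import subprocess

    # Run the agent's script with read-only system dirs, a throwaway /tmp,
    # fresh namespaces and no network. No root required.
    subprocess.run([
        "bwrap",
        "--ro-bind", "/usr", "/usr",
        "--ro-bind", "/lib", "/lib",
        "--ro-bind", "/lib64", "/lib64",
        "--ro-bind", "/work", "/work",      # the only project dir it may read
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--unshare-all",                    # new namespaces, incl. no network
        "--die-with-parent",
        "python3", "/work/agent_script.py", # hypothetical script
    ], check=True)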
You guys and astral are my favorite groups in the python ecosystem
1. Large built-in standard library (CSV, sqlite3, xml/json, zipfile) - see the sketch after this list.
2. In Python, whatever the LLM is likely to do will probably work. In JS, you have the Node/Deno split, far too many libraries that do the same thing (XMLHttpRequest / Axios / fetch), many mutually incompatible import syntaxes (e.g. compare tsx versus Node's native TS execution), and features like top-level await (very important for small scripts, and something that an LLM is likely to use!), which only work if you pray three times on the day of the full moon.
3. Much better ecosystem for data processing (particularly csv/pandas), partially resulting from operator overloading being a thing.
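On point 1, a quick sketch of why the batteries-included stdlib matters for LLM-written scripts; everything here ships with CPython, no package manager involved:

    import csv, io, json, sqlite3

    # Parse CSV, load it into an in-memory SQL table, aggregate, emit JSON.
    rows = list(csv.DictReader(io.StringIO("name,qty\nwidget,3\ngadget,5\n")))

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE items (name TEXT, qty INT)")
    db.executemany("INSERT INTO items VALUES (?, ?)",
                   [(r["name"], int(r["qty"])) for r in rows])
    total, = db.execute("SELECT SUM(qty) FROM items").fetchone()

    print(json.dumps({"total": total}))  # {"total": 8}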
Such changes take time, and I favor an "evolution trumps revolution" approach for such features. The JS/TS ecosystem has the advantage here, as it has already been through its roughest time since ES2015. In hindsight, that was a very healthy choice, and the type system TS brought is something many other programming languages can only wish for.
If it weren't for its rich standard library and uv, I would still clearly favor TS and a runtime like Bun or Deno. Python still suffers from spread-out global state and a multi-paradigm approach when it comes to concurrency (if concurrency has even been considered by the library author). Python being the first programming language for many scientists takes its toll too: rich libraries of dubious quality in various domains. Whereas JS's origins in browser scripting contributed to the convention of treating global state as something to be frowned upon.
I wish both systems had good object schema validation built into the standard library. Python has the upper hand here with dataclasses, but it still follows a "take it or throw" approach, rather than supporting customization for validations.
You do? Deno is maybe a single digit percentage of the market, just hyped tremendously.
> E.G. compare tsx versus Node's native ts execution
JSX/TSX, despite what React people might want you to believe, are not part of the language.
> which only work if you pray three times on the day of the full moon.
It only doesn't work in some contexts due to legacy reasons. Otherwise it's just elaborate syntax sugar for `Promise`.
Claude Code always resorts to running small python scripts to test ideas when it gets stuck.
Something like this would mean I don't need to approve every single experiment it performs.
I figured that that was because they want tighter integration and a safer execution environment for code written by the LLM. And sandboxing is already very common for JavaScript in browsers.
Is this mostly just for codemode where the MCP calls instead go through a Monty function call? Is it to do some quick maths or pre/post-processing to answer queries? Or maybe to implement CaMeL?
It feels like the power of terminal agents is partly because they can access the network/filesystem, and so sandboxed containers are a natural extension?
It's about the role of technologies in evolution, responsibility versus utilitarian take, etc. It should be developed and discussed seriously, but not in a buried sub-thread.
IMHO it's irrelevant: it has a slightly better type system and runtime, but that's totally beside the point nowadays.
With AI doing mostly everything we should forget these past riddles. Now we all should be looking towards fail-safe systems, formal verification and domain modeling.
Also known as the "swiss cheese model" in risk management.
It will have access to the original runtimes and ecosystems, and it can't be tampered with; it's well tested, with no forks and tricky indirections needed to bypass syscalls.
Such runtimes come with a bill of technical debt: no support, their own specific documentation, and a lack of ecosystem and feature support. And let's hope it isn't abandoned in two years.
The same could be applied to Docker or Nix Linux, or isolated containers, etc… the level of security should be good enough for LLMs, not necessarily secure against human-directed threats (specialist hackers).
Similarly: TypeScript, despite what Node people might want you to believe, is not part of the JavaScript language.
I think you misunderstood this. tsx in this context is/was a way to run typescript files locally without doing tsc yourself first, ie make them run like a script. You can just use Node now, but for a long time it couldn’t natively run typescript files.
The only limitation I run into using Node natively is you need to do import types as type imports, which I doubt would be an issue in practice for agents.
> Monty avoids the cost, latency, complexity and general faff of using full container based sandbox for running LLM generated code.
> Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds.
And on the GPU side, the existing libraries provide DSL-based JITs, so for many scenarios the performance is not much different from C++.
Now NVIDIA is also in the game with the new tile-based architecture, even with first-party support for writing kernels in Python.
In fact, the team has back pedaled into trying to make its own thing like in the early days.
> Now we all should be looking towards fail-safe systems, formal verification and domain modeling.
We have been looking toward these things since the term distributed computing was coined, haven't we? Building fail-safe systems has always been the goal, ever since long-running processes became a thing.
Despite any "past riddles", the more expressive the type system the better the domain modeling experience, and I'd guess formal methods would benefit immensely from a good type system. Is there any formal language that is usable as general-purpose programming language I don't know of? I only ever see formal methods used for the verification of distributed algorithms or permission logic, on the theorem proving side of things, but I have yet to see a single application written only in something like Lean[0] or LiquidHaskell[1]...
Only if the training data has enough Python code that doesn't use classes.
(We're in luck that these things are trained on Stackoverflow code snippets.)
I also want my models to be able to write typescript, python, c# etc, or any language and run it.
Having the model have access to a completely minimal version of python just seems like a waste of time.
My models are writing code all day in 3/4 different languages, why would I want to:
a) Restrict them to Python
b) Restrict them to a cutdown, less-useful version of Python?
My models write me Typescript and C# and Python all day with zero issues. Why do I need this?
I've always used ts-node, so I forgot about tsx's existence, but still those are just tools used for convenience.
Nothing currently actually runs TypeScript natively and the blessed way was always to compile it to JS and run that.
This is true in a sense, but every little papercut at the lower levels of abstraction degrades performance at higher levels as the LLM needs to spend its efforts on hacking around jank in the Python interpreter instead of solving the real problem.
I wouldn't call it running TS natively - what they're doing is either using an external tool, or just stripping types, so several things, like most notably enums, don't work by default.
I mean, that's more than enough for my use cases and I'm happy that the feature exists, but I don't think we'll ever see a native TypeScript engine. Would have been cool, though, considering JS engines define their own internal types anyway.
Really tired of every AI-related tool released as of late being a half-GB node behemoth with hundreds of library dependencies.
Or alternatively some cryptic academic Rust codebase.
Any human or AI want to take the challenge?
Do you not realize how this sounds?
>many mutually-incompatible import syntaxes
Do you think there are 22 competing package managers in python because the package/import system "just works"?
that said, the class restriction feels weird. classes aren't the security boundary. file access, network, imports - that's where the risk is. restricting classes just forces the model to write uglier code for no security gain. would be curious if the restrictions map to an actual threat model or if it's more of a "start minimal and add features" approach.
My current security model is to give it a separate Linux user.
So it can blow itself up and... I think that's about it?
I'm an optimist on this and I remain hopeful that AI will create more and better jobs, but I'm not at all certain about that. It's possible it will play out the way you describe, and that will suck.
I'm not ready to blame the 100,000s of software layoffs on AI though - I think the more likely explanation for those is over-hiring during Covid combined with the end of ZIRP.
The security angle is probably the most compelling part. Running arbitrary AI-generated Python in a full CPython runtime is asking for trouble — the attack surface is enormous. Stripping it down to a minimal subset at least constrains what the generated code can do.
The bet here seems to be that AI-generated code can be nudged to use a restricted subset through error feedback loops, which honestly seems reasonable for most tool-use scenarios. You don't need metaclasses and dynamic imports to parse JSON or make API calls.
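That error-feedback loop might look something like this; `llm` and `sandbox` are hypothetical stand-ins, not Monty's (or anyone's) real API:

    MAX_ATTEMPTS = 3

    def run_with_feedback(llm, sandbox, task: str):
        """Let the model repair its own code from the interpreter's errors."""
        code = llm.write_code(task)                  # hypothetical call
        for _ in range(MAX_ATTEMPTS):
            try:
                return sandbox.run(code)             # hypothetical call
            except Exception as err:                 # e.g. "classes not supported"
                code = llm.write_code(task, error=str(err))
        raise RuntimeError("model never produced runnable code")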
The second use case is for humans to learn from humans. Your open source projects are excellent examples, same with Django and the Python open source ecosystem.
I just hope humans will not stop learning. As long as you share your passion for learning, people will learn from you. It has nothing to do with automation.
There is a ton of wheel reinvention going on right now because everyone wants to be cool in the age of AI.
Use boring tech, you'll thank me and yourself later
Which in this case means, just use regular python. Your devops team is unlikely to allow knock off python in production. TS is fine too, I mainly write Go
It's not industrial-grade safety for public use, but it'll do for personal use. Other tools for it are also mentioned.
Perhaps you're using V8 isolates, but then you're back to the "heavily restricted environment within the process" and you lose the things you'd want your AI to be able to do; and even then you still have to sandbox the hell out of it to be safe, and seriously consider side-channel leaks.
And even after all of that you'd better hope you're staying up to date with patches.
MicroVMs are going to just be way simpler IMO. I don't really get the appeal of using V8 for this unless you have platform/deployment limitations. Talking over Firecracker's vsock is extremely fast. Firecracker is also insanely safe - 3 CVEs ever, and IMO none are exploitable.
There aren't; a large fraction of tools people mention in this context aren't actually package managers and don't try to be package managers. Sometimes people even conflate standards and config files with tools. It's really amazing how much FUD there is around it.
But more importantly, there is no such thing as "the package/import system". Packaging is one thing, and the language's import system is a completely different thing.
And none of that actually bears on the LLM's ability to choose libraries and figure out language syntax and APIs. For that matter, you don't have to let it set up the environment (or change your existing setup) if you don't want to.
You don't have to give it bash, depending on your tools at least.
> So it can blow itself up and... I think that's about it?
And exfiltrate data via the Internet, fill up disk space...
Thank you!!
Reminds me of the evolutionary debate. What's important is that just because something can learn to adapt doesn't mean it'll find an optimized adaptation, nor will it continually refine it.
As far as I can tell, AI will only solve problems well where the problem space is properly defined. Most people won't know how to do that.
But most real-world code needs to use (standard/3rd party) libraries, no? Or is this for AI's own feedback loop?
It's something, I think, that's missing from the smolagents ecosystem anyway!
When you put it like that I can see why people end up with electron!
This is how I finally was able to make a large Rust project without having to sacrifice my free time to really, fully understand Rust. I have read through the Rust book several times, but I never have time to fully "practice" Rust, so I was able to say screw it and build my own Rust software using Claude Code.
The Man Who Listens to Horses (1997) is an excellent book by Monty Roberts about learning the language of horses and observing and listening to animals: https://www.biblio.com/search.php?stage=1&title=The+Man+Who+...
Video demonstration of the above: https://www.youtube.com/watch?v=vYtTz9GtAT4
Gleaned from https://github.com/containers/bubblewrap/blob/0c408e156b12dd... and https://github.com/containers/bubblewrap/tree/0c408e156b12dd...