There's very little code in the world that I wouldn't want to run in a robust sandbox. The low-level OS components that manage the sandbox itself are about it.
What is the end game here?
It is kind of like a "fractal" attack surface: the surface keeps growing the "deeper" one looks into it. It is nightmarish from that perspective ...
Ideally, I'd like not to execute any kind of arbitrary code when doing something as mundane as rendering a font. If that's not possible, then the code could be restricted to something less than Turing-complete, e.g. formula evaluation (a restricted lambda calculus without arbitrary recursion).
The problem is that even sandboxed code is unpredictable in terms of memory and runtime cost and can only be statically analyzed to a limited extent (halting problem and all).
Additionally, once it's there, people will bring in libraries, frameworks, and sprawling dependency trees, which will further increase the computational cost and unpredictability.
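To make the "less than Turing-complete" idea concrete, here is a minimal sketch (hypothetical, in Python, not any real font engine's design): a formula evaluator whose guest language is a fixed expression AST with no loops or recursion constructs, plus a hard fuel budget. That bounds both expressiveness and worst-case cost up front, which also blunts the halting-problem objection, since every program in such a language provably terminates within its budget.

    class OutOfFuel(Exception):
        pass

    def evaluate(expr, env, fuel):
        """Evaluate a nested-tuple expression such as
        ("add", ("var", "x"), ("num", 2.0)).

        fuel is a hard step budget: evaluation either finishes within
        it or raises OutOfFuel, so worst-case runtime is known before
        running untrusted input. Returns (value, remaining_fuel).
        """
        if fuel <= 0:
            raise OutOfFuel()
        op = expr[0]
        if op == "num":
            return expr[1], fuel - 1
        if op == "var":
            return env[expr[1]], fuel - 1
        if op in ("add", "mul", "div"):
            lhs, fuel = evaluate(expr[1], env, fuel - 1)
            rhs, fuel = evaluate(expr[2], env, fuel)
            if op == "add":
                return lhs + rhs, fuel
            if op == "mul":
                return lhs * rhs, fuel
            return lhs / rhs, fuel
        raise ValueError("unknown op: %r" % (op,))

    # A glyph-positioning-style formula: x * scale + offset
    expr = ("add", ("mul", ("var", "x"), ("var", "scale")), ("var", "offset"))
    print(evaluate(expr, {"x": 3.0, "scale": 2.0, "offset": 0.5}, fuel=100))

The dependency-sprawl problem is harder to legislate away, but a language this small at least has nothing to import.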
Imagine that you download an .odt/.docx/.pdf form with an embedded font and open it in LibreOffice in 2025. You start to type some text... and the font's code starts to saturate the CPU's FPU ports (i.e. div/sqrt units) in a specific pattern. Meanwhile, some tab in your browser measures CPU load or port saturation by timing some simple operation, and captures every character you typed.
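A toy sketch of the probe side, in Python purely for illustration (the scenario above implies JavaScript with high-resolution timers; also, CPython's interpreter overhead means this measures general core contention more than FPU port pressure specifically): time a fixed batch of divisions and watch for slowdowns.

    import time

    def probe(iters=200_000):
        # Time a fixed, data-dependent chain of divisions. If a
        # co-resident workload is hammering the divider (or the core
        # generally), this batch takes measurably longer.
        x = 1.0
        start = time.perf_counter()
        for _ in range(iters):
            x = 1.0 / (x + 1.0)
        return time.perf_counter() - start

    baseline = min(probe() for _ in range(20))
    for _ in range(100):
        t = probe()
        # Sustained slowdowns are the covert channel's "bits"; the
        # 1.3x threshold is arbitrary for this sketch.
        print("busy" if t > 1.3 * baseline else "idle")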
iirc browsers already fuzz precise timer resolution for exactly this reason?
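Roughly, yes: browsers coarsen and jitter their high-resolution timers (the exact granularity varies by browser and by cross-origin isolation state). A toy sketch of the idea, with an illustrative granularity:

    import random
    import time

    GRANULARITY = 100e-6  # 100 microseconds; real values vary by browser

    def fuzzed_now():
        # Quantize the clock and add random jitter within one bucket,
        # so fine-grained contention measurements drown in noise.
        t = time.perf_counter()
        return (t // GRANULARITY) * GRANULARITY + random.uniform(0, GRANULARITY)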