The only output from the WASM is to draw to screen. There is no chance of an RCE, or data exfiltration.
There's very little code in the world that I wouldn't want to run in a robust sandbox. Low-level OS components that manage the sandbox are about it.
What is the end game here?
It is kind of like a "fractal" attack surface, with increasing surface the "deeper" one looks into it. It is nightmarish from that perspective ...
Last I checked there were about 4-10 TTF bugs discovered and actively exploited per year. I think I heard those stats in 2018 or so. This has been a well known and very commonly exploited attack vector for at least 20 years.
Ideally, I'd like not to execute any kind of arbitrary code when doing something as mundane as rendering a font. If that's not possible, then the code could be restricted to something less than Turing complete, e.g. formula evaluation (a restricted lambda calculus without arbitrary recursion).
The problem is that even sandboxed code is unpredictable in terms of memory and runtime cost and can only be statically analyzed to a limited extent (halting problem and all).
Additionally, once it's there, people will bring in libraries, frameworks and sprawling dependency trees, which will further increase the computing cost and unpredictability of it.
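To make the "formula evaluation, less than Turing complete" idea above concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the exposed variable names and the idea of plugging such formulas into a font engine are made up for illustration. The point is that with no loops and no user-defined recursion, plus an explicit node budget, the host gets a worst-case cost bound that a general-purpose VM cannot give.

```python
# Hypothetical sketch of a "formula, not a program" extension point.
# The exposed variables (advance, units_per_em) are invented for illustration.
# No loops, no user-defined functions, and a hard node budget, so worst-case
# runtime and memory are bounded by the size of the formula itself.

import ast
import operator

ALLOWED_BINOPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}


def eval_formula(src: str, variables: dict[str, float], budget: int = 500) -> float:
    """Evaluate an arithmetic formula over a fixed set of named metrics."""
    tree = ast.parse(src, mode="eval")
    steps = 0

    def ev(node):
        nonlocal steps
        steps += 1
        if steps > budget:
            raise ValueError("formula too large")
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return float(node.value)
        if isinstance(node, ast.Name):
            return float(variables[node.id])
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_BINOPS:
            return ALLOWED_BINOPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError(f"disallowed construct: {type(node).__name__}")

    return ev(tree)


# e.g. a kerning tweak expressed over metrics the host chooses to expose:
print(eval_formula("advance * 0.98 + units_per_em / 100",
                   {"advance": 600, "units_per_em": 1000}))  # -> 598.0
```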
... except that it can happen in non-browser contexts.
Even for browsers, it took 20+ years to arrive at a combination of ugly hacks and standard practices where developers who make no mistakes in following a million arcane rules can mostly avoid the massive day-one security problems caused by JavaScript (and its interaction with other misfeatures like cookies and various cross-site nonsense). During all of which time the "Web platform" types were beavering away giving it more access to more things.
The Worldwide Web technology stack is a pile of ill-thought-out disasters (or, for early, core architectural decisions, not-thought-out-at-all disasters), all vaguely contained with horrendous hackery. This adds to the pile.
> The only output from the WASM is to draw to screen.
Which can be used to deceive the user in all kinds of well-understood ways.
> There is no chance of an RCE, or data exfiltration.
Assuming there are no bugs in the giant mass of code that a font can now exercise.
I used to write software security standards for a living. Finding out that you could embed WASM in fonts would have created maybe two weeks of work for me, figuring out the implications and deciding what, if anything, could be done about them. Based on, I don't know, a hundred similar cases, I believe I probably would have found some practical issues. I might or might not have been able to come up with any protections that the people writing code downstream of me could (a) understand and (b) feasibly implement.
Assuming I'd found any requirements-worthy response, it probably would have meant much, much more work than that for the people who at least theoretically had to implement it, and for the people who had to check their compliance. At one company.
So somebody can make their kerning pretty in some obscure corner case.
Whether this is good or bad, I have no opinion on. It is "just" another layer of complexity and attack surface at this point. We have programmable shaders, rowhammer, speculative-execution bugs, data-dependent timing side channels, kernel-level BPF scripting, prompt injection and much more. Throwing WASM-based font rendering into the mix just balances more on top of the pile. After some years in IT security, I think there are far easier ways to compromise systems than these arcane approaches: grab the data you need from a public AWS bucket, or social-engineer your access - far easier and cheaper.
For what it's worth, I think embedded WASM is a better idea than rolling your own ecosystem for scripting capabilities.
Imagine that you download an .odt/.docx/.pdf form with an embedded font and open it in LibreOffice in 2025. You start to type some text... and the font starts to saturate the FPU ports (div/sqrt) in a specific pattern. Meanwhile, a tab in your browser measures CPU load or port saturation by doing some simple action, and captures every character you typed.
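As a toy illustration of the shape of that channel (not of FPU port contention itself, which needs native code, shared SMT cores and fine-grained timers), here is a Python sketch where one thread modulates CPU load to encode bits and another recovers them by counting how much work it gets done per time slot. On CPython the contention mostly comes from the GIL and the OS scheduler, but the principle is the same.

```python
# Toy covert channel: one thread modulates CPU load to encode bits,
# another recovers them by counting how much work it gets done per slot.

import threading
import time

SLOT = 0.05                      # seconds per transmitted bit
MESSAGE_BITS = [1, 0, 1, 1, 0, 0, 1, 0]

def sender(bits):
    for bit in bits:
        end = time.perf_counter() + SLOT
        if bit:
            while time.perf_counter() < end:
                pass             # burn CPU for a '1'
        else:
            time.sleep(SLOT)     # stay idle for a '0'

def receiver(n_bits, out):
    counts = []
    for _ in range(n_bits):
        end = time.perf_counter() + SLOT
        work = 0
        while time.perf_counter() < end:
            work += 1            # how many iterations did we get this slot?
        counts.append(work)
    threshold = (max(counts) + min(counts)) / 2
    out.extend(1 if c < threshold else 0 for c in counts)

recovered = []
rx = threading.Thread(target=receiver, args=(len(MESSAGE_BITS), recovered))
tx = threading.Thread(target=sender, args=(MESSAGE_BITS,))
rx.start(); tx.start()
tx.join(); rx.join()
print("sent     ", MESSAGE_BITS)
print("recovered", recovered)
```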
In that case, being able to show arbitrary other text would definitely be a hindrance, because the scanning software typically looks at the data stored in the database. However, I don't think you need a Turing machine to exploit this: a single ligature in a well-crafted font could produce a full paragraph of text.
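A toy model of that idea, assuming nothing about real GSUB tables (OpenType's "multiple substitution" lookup is the actual mechanism for one-to-many mappings): a static table, no code at all, and a single trigger character still expands into text that exists only in the rendered output.

```python
# Toy model of one-to-many glyph substitution: a static lookup table,
# no scripting, yet one trigger character expands into a whole paragraph.

EXPANSIONS = {
    "\u00a7": list("This text exists only in the rendered output, "
                   "not in the underlying document data. " * 3),
}

def shape(text: str) -> list[str]:
    """Map characters to 'glyphs', expanding any trigger characters."""
    glyphs: list[str] = []
    for ch in text:
        glyphs.extend(EXPANSIONS.get(ch, [ch]))
    return glyphs

stored = "Invoice total: 100 EUR \u00a7"   # what a scanner sees in the stored data
print("".join(shape(stored)))              # what the user sees on screen
```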
Perhaps there's an alternative vector where someone's premade font, on a site that doesn't allow font uploading, can be exploited to perform arbitrary calculations given certain character strings. Maybe Bitcoin mining, if you could find a way to phone home with the result.
IIRC browsers already coarsen and jitter their timers for exactly this reason?
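Roughly the mitigation being alluded to: quantize and jitter the clock so that events much shorter than its resolution become unrecoverable. A sketch with made-up numbers (real browser timer resolutions vary and depend on things like cross-origin isolation):

```python
# Sketch of a deliberately degraded clock: quantized, plus random jitter.

import random
import time

RESOLUTION = 0.0001   # pretend 100-microsecond granularity (made-up number)

def coarse_now() -> float:
    t = time.perf_counter()
    t = round(t / RESOLUTION) * RESOLUTION     # quantize to the resolution
    return t + random.uniform(0, RESOLUTION)   # smear within one tick

# A ~10-microsecond event disappears into quantization noise:
a = coarse_now()
time.sleep(0.00001)
b = coarse_now()
print(f"measured {b - a:.6f} s")   # dominated by tick size and jitter, not the real 10 us
```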
Having said that, the "arbitrary code" found in TrueType is not really arbitrary either - it's not supposed to be able to do anything except change the appearance of the font. From a security standpoint, there's no theoretical difference between a WAV and a TTF font - neither can hurt your machine if the loader is bug-free. Practically speaking though, a font renderer that needs to implement a sort of virtual machine is more complex, and therefore more likely to have exploitable bugs, than a WAV renderer that simply needs to swap a few bytes around and shove them at a DAC.
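For contrast, here is roughly the whole job of a WAV header loader, sketched in Python: fixed-offset fields, bounds checks, and byte-order handling. This deliberately ignores the many real-world chunk variants, but it shows how little room there is for the kind of state machine a hinting VM needs.

```python
# Simplified WAV header parser: fixed layout, explicit bounds checks.

import struct

def parse_wav_header(data: bytes) -> dict:
    """Parse the canonical 44-byte WAV header, rejecting anything odd."""
    if len(data) < 44:
        raise ValueError("truncated file")
    riff, _size, wave = struct.unpack_from("<4sI4s", data, 0)
    if riff != b"RIFF" or wave != b"WAVE":
        raise ValueError("not a RIFF/WAVE file")
    fmt_id, fmt_size = struct.unpack_from("<4sI", data, 12)
    if fmt_id != b"fmt " or fmt_size < 16 or 20 + fmt_size > len(data):
        raise ValueError("bad fmt chunk")
    audio_format, channels, sample_rate, _byte_rate, _align, bits = \
        struct.unpack_from("<HHIIHH", data, 20)
    if channels == 0 or sample_rate == 0:
        raise ValueError("nonsense fmt values")
    return {"pcm": audio_format == 1, "channels": channels,
            "sample_rate": sample_rate, "bits_per_sample": bits}
```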
If this font format is successful, then given enough time, it will become legacy. People won't be as vigilant about it, and they won't understand the internals as well. This is why TIFF-based exploits became so common 20-30 years after TIFF's heyday.
Security-wise, Turing completeness doesn't matter[note]. All that really matters is how complex the implementation of the format is. H.264 is not Turing complete, but it is complex, and thus a frequent source of vulnerabilities. Conversely, you could probably put a toy Brainfuck interpreter in ring 0 and, with moderate care, be confident that no malicious Brainfuck code can take over your system.
[note] It matters a little bit if you consider it a "security" problem that you lose any guarantees of how long a file might take to load. A malicious file could infinite loop, and thus deny service. But then again, this isn't restricted to Turing complete formats - a zip bomb can also deny service this way.
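A sketch of what "moderate care" could look like for that toy Brainfuck interpreter: a fixed-size tape, a bounds-checked (here, wrapping) pointer, and a hard step budget, which covers both memory safety and the infinite-loop denial of service from the footnote. This is an illustration, not a hardened implementation, and certainly not something to actually put in ring 0.

```python
# Bounded Brainfuck interpreter: fixed tape, wrapping pointer, step budget.
# Input (',') and any other unknown characters are simply ignored in this sketch.

def run_bf(program: str, tape_size: int = 30_000, max_steps: int = 1_000_000) -> str:
    tape = bytearray(tape_size)
    ptr, pc, steps = 0, 0, 0
    out = []

    # Precompute matching brackets; reject unbalanced programs up front.
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                raise ValueError("unbalanced ']'")
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        raise ValueError("unbalanced '['")

    while pc < len(program):
        steps += 1
        if steps > max_steps:
            raise RuntimeError("step budget exceeded")    # DoS attempt, contained
        c = program[pc]
        if c == ">":
            ptr = (ptr + 1) % tape_size                   # wrap instead of overflowing
        elif c == "<":
            ptr = (ptr - 1) % tape_size
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]                                # skip past matching ']'
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]                                # jump back to matching '['
        pc += 1
    return "".join(out)

# Prints "HI":
print(run_bf("++++++++[>+++++++++<-]>.+."))
```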