zlacker

[return to "Broken Proofs and Broken Provers"]
1. Legion+UT[view] [source] 2026-02-04 15:12:55
>>RebelP+(OP)
I'm a bit skeptical about comparing the rates of reported soundness bugs across different systems and drawing conclusions about the underlying approaches. There's typically a class of bugs that users can stumble into by doing something unusual, and another class that can only really be found by deliberately exploiting holes in the implementation. The first class depends on which features get exercised the most, the second on how many eyes are on the source (and how accessible it is), and both classes depend heavily on the overall size of the userbase.

E.g., Metamath is designed to be as theoretically simple as possible, to the point that it's widely considered a toy in comparison to 'serious' proof systems: a verifier is mainly just responsible for pushing around symbols and strings. In spite of this simplicity, I was able to find soundness bugs in a couple major verifiers, simply because few people use the project to begin with, and even fewer take the time to pore over the implementations.
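
To make "pushing around symbols and strings" concrete, here's the rough shape of a single proof step in a toy, much-simplified Metamath-style checker (my own OCaml sketch, not any real verifier's code):

    (* Toy Metamath-style proof step: statements are just lists of
       symbols.  A step pops an assertion's instantiated hypotheses
       off a stack and pushes its instantiated conclusion. *)
    type sym = Const of string | Var of string
    type stmt = sym list

    (* Apply a substitution (variable name -> replacement symbols). *)
    let subst (s : (string * stmt) list) (st : stmt) : stmt =
      List.concat_map
        (function Var v -> List.assoc v s | c -> [c]) st

    (* Check the top of the stack against the assertion's hypotheses
       under substitution s, then push the substituted conclusion. *)
    let step s (hyps, concl) (stack : stmt list) : stmt list =
      let n = List.length hyps in
      if List.length stack < n then failwith "stack underflow";
      let popped = List.filteri (fun i _ -> i < n) stack in
      let rest = List.filteri (fun i _ -> i >= n) stack in
      if popped = List.map (subst s) hyps
      then subst s concl :: rest
      else failwith "hypothesis mismatch"

That's essentially the whole job. A real verifier adds disjoint-variable checks on top (omitted here), which is exactly the sort of check an implementation can quietly skip.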

So I'd be hesitant to start saying that one approach is inherently more or less bug-prone than another, except to be slightly warier in general of larger or less accessible kernels.

◧◩
2. Gregar+Id1[view] [source] 2026-02-04 16:40:23
>>Legion+UT
LCF-style provers like Isabelle/HOL and HOL Light are among the oldest and most widely used interactive theorem provers. If they consistently show smaller error rates than other systems, that is an interesting empirical observation. To give but one recent example: Amazon recently announced a vast, 260,000-line Isabelle/HOL-checked correctness proof of their new Nitro hypervisor for AWS EC2 Graviton5 instances.

LCF-style provers have a much smaller trusted computing base (TCB) than Curry-Howard-based provers like Coq, Agda, and Lean.

One may wonder whether there is a correlation between TCB size and error rate in widely used provers.

◧◩◪
3. zozbot+Pg1[view] [source] 2026-02-04 16:53:24
>>Gregar+Id1
> LCF-style provers have a much smaller trusted computing base (TCB) than Curry-Howard-based provers like Coq, Agda, and Lean.

I'm not sure that this is correct. The TCB in a CH-based prover is just the implementation of the actual kernel. In LCF, you also have to trust that tactics are implemented in a programming language that doesn't allow you to perform unsound operations, i.e., the metalanguage's type safety and module abstraction become part of what keeps the thm type opaque. That's a vast expansion in your TCB. (You can implement LCF-like "tactics" in a CH-based prover via so-called reflection, which delegates the proof to a runtime computation, but you do have to prove that your computation yields a correct decision for the problem.)
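
As a toy illustration of the reflection pattern (Lean 4 here; `allEven` and `allEven_sound` are names I made up): you write a boolean checker, prove it sound once, and then each instance is discharged by running the checker rather than by a proof term that grows with the problem:

    -- Toy reflection sketch (my own names, not a library's):
    -- the checker is ordinary computable code.
    def allEven (l : List Nat) : Bool :=
      l.all (fun n => n % 2 == 0)

    -- The one-time proof obligation: the computation decides the Prop.
    theorem allEven_sound (l : List Nat) (h : allEven l = true) :
        ∀ n ∈ l, n % 2 = 0 := by
      simpa [allEven, List.all_eq_true] using h

    -- Each instance is now checked by evaluating allEven to true:
    example : ∀ n ∈ [2, 4, 6, 8, 10], n % 2 = 0 :=
      allEven_sound _ rfl

The kernel still checks everything; you've just traded proof-term size for one soundness lemma plus evaluation.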

◧◩◪◨
4. Dacit+hs1[view] [source] 2026-02-04 17:44:41
>>zozbot+Pg1
No. The whole point of the LCF approach is that only kernel functions can create theorems. Usually this is done by having a Thm module with an opaque thm type (so its instances can only be created by that module) and embedding the base rules of your logical calculus as functions of that Thm module (a shallow embedding). Thus all thm objects are correct by construction (w.r.t. your calculus), and you only need to check the Thm module to verify this.

Tactics generate thms by calling functions of the Thm module in some fashion.
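
A minimal sketch of that shape (OCaml here, with a made-up two-rule calculus; real kernels are the same idea with more rules):

    (* Minimal LCF-style kernel sketch: outside this module, the ONLY
       way to obtain a thm is through the functions it exports. *)
    module Thm : sig
      type term = Var of string | Imp of term * term
      type thm                          (* abstract: no constructors *)
      val assume : term -> thm          (* A |- A *)
      val intro  : term -> thm -> thm   (* discharge a hypothesis *)
      val mp     : thm -> thm -> thm    (* modus ponens *)
      val concl  : thm -> term
      val hyps   : thm -> term list
    end = struct
      type term = Var of string | Imp of term * term
      type thm = Thm of term list * term  (* hypotheses |- conclusion *)
      let assume a = Thm ([a], a)
      let intro a (Thm (hs, c)) =
        Thm (List.filter (fun h -> h <> a) hs, Imp (a, c))
      let mp (Thm (hs1, c1)) (Thm (hs2, c2)) =
        match c1 with
        | Imp (a, b) when a = c2 -> Thm (hs1 @ hs2, b)
        | _ -> failwith "mp: not a matching implication"
      let concl (Thm (_, c)) = c
      let hyps (Thm (hs, _)) = hs
    end

    (* A "tactic" lives outside the kernel: it can only compose kernel
       calls, so anything it returns is correct by construction. *)
    let identity a : Thm.thm = Thm.intro a (Thm.assume a)  (* |- a -> a *)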

◧◩◪◨⬒
5. Legion+KY1[view] [source] 2026-02-04 20:09:29
>>Dacit+hs1
At least in Isabelle/ML, it seems that in practice there are also untrusted "oracles" that a proof can invoke to generate "thm" objects [0], and it's not entirely trivial to automatically ensure that only trusted sources are used in a project [1], unless I am misunderstanding the linked threads.

Of course, as you say, there's no free lunch in making sure that high-level proofs are lowered into the trusted part correctly, but that part should ideally be as simple as possible.

[0] https://lists.cam.ac.uk/sympa/arc/cl-isabelle-users/2025-02/...

[1] https://lists.cam.ac.uk/sympa/arc/cl-isabelle-users/2025-02/...
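
In terms of the toy Thm module upthread, an oracle is an extra exported entry point that mints thms without deriving them (hypothetical sketch; Isabelle at least tags theorems with the oracles they used, but the kernel no longer vouches for them):

    (* Hypothetical: the toy kernel from above, plus an oracle.
       "Correct by construction" now holds only up to auditing
       every call site of `oracle`. *)
    module ThmO : sig
      type term = Var of string | Imp of term * term
      type thm
      val assume : term -> thm
      val mp     : thm -> thm -> thm
      val oracle : hyps:term list -> concl:term -> thm  (* escape hatch *)
    end = struct
      type term = Var of string | Imp of term * term
      type thm = Thm of term list * term
      let assume a = Thm ([a], a)
      let mp (Thm (hs1, c1)) (Thm (hs2, c2)) =
        match c1 with
        | Imp (a, b) when a = c2 -> Thm (hs1 @ hs2, b)
        | _ -> failwith "mp: not a matching implication"
      let oracle ~hyps ~concl = Thm (hyps, concl)
    end

    (* Any code anywhere can now "prove" anything: *)
    let bogus = ThmO.oracle ~hyps:[] ~concl:(ThmO.Var "false")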
