LCF-style provers have a much smaller trusted computing base than Curry/Howard based provers like Coq, Agda and Lean.
One may wonder whether there is a correlation between the size of the TCB and the error rate in widely used provers.
I'm not sure that this is correct. The TCB in a CH-based prover is just the implementation of the actual kernel. In LCF, you also have to trust that the programming language the tactics are written in doesn't let them perform unsound operations (e.g. forge a theorem by breaking the kernel's type abstraction). That's a vast expansion of your TCB. (You can implement LCF-like "tactics" in a CH-based prover via so-called reflection, which delegates the proof to a runtime computation, but you do have to prove that the computation yields a correct decision for the problem.)
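To make the abstraction point concrete, here is a minimal toy sketch of the LCF idea in OCaml. This is not HOL Light's actual kernel; the term language and rule names are invented for illustration. The thm type is abstract, so the only way code outside the module can obtain a theorem is through the kernel's inference rules, and that is exactly why the type safety of the implementation language is part of what you have to trust.

    (* Toy LCF-style kernel: the abstract type [thm] can only be
       constructed by the inference rules exported here.  Tactics are
       arbitrary ML code, but they can never forge a thm -- as long as
       the language enforces the module's abstraction barrier. *)
    module Kernel : sig
      type term = Var of string | Eq of term * term  (* toy term language *)
      type thm                                       (* abstract: no forgery *)
      val concl : thm -> term
      val refl  : term -> thm         (* |- t = t *)
      val sym   : thm -> thm          (* from |- a = b, infer |- b = a *)
      val trans : thm -> thm -> thm   (* from |- a = b and |- b = c, infer |- a = c *)
    end = struct
      type term = Var of string | Eq of term * term
      type thm = Thm of term
      let concl (Thm t) = t
      let refl t = Thm (Eq (t, t))
      let sym (Thm t) =
        match t with
        | Eq (a, b) -> Thm (Eq (b, a))
        | _ -> failwith "sym: not an equation"
      let trans (Thm t1) (Thm t2) =
        match t1, t2 with
        | Eq (a, b), Eq (b', c) when b = b' -> Thm (Eq (a, c))
        | _ -> failwith "trans: conclusions do not compose"
    end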
Tactics generate thms by calling functions of the Thm module in some fashion (a toy example of such a derived rule follows below).
https://github.com/jrh13/hol-light/blob/master/fusion.ml
A mere 676 lines of code. This minuscule TCB is where the comparatively low bug count (despite intense use) comes from.
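In this architecture a tactic or derived rule is just ordinary ML code that calls the kernel; here is a hedged toy example against the sketch above (again, not HOL Light's real API). Whatever else the code does, the only theorems it can return are ones the kernel's primitive rules agreed to build.

    (* Using the toy Kernel above: a derived rule is plain OCaml.  It
       may be buggy or arbitrarily clever, but every thm it returns
       went through the kernel's primitive rules. *)
    open Kernel

    (* From |- a = b and |- c = b, conclude |- a = c. *)
    let trans_through_common_rhs (th1 : thm) (th2 : thm) : thm =
      trans th1 (sym th2)

    let () =
      let th = refl (Var "x") in                    (* |- x = x *)
      let th' = trans_through_common_rhs th th in   (* still |- x = x *)
      assert (concl th' = Eq (Var "x", Var "x"))

A buggy derived rule can fail or derive something other than what you wanted, but it cannot mint an unsound thm unless the kernel itself (or the language's abstraction barrier) is broken; that is why only those ~676 lines need to be trusted.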
There is no free lunch.
Of course there's no free lunch, as you say, in making sure that high-level proofs are lowered into the trusted part correctly; but the trusted part is certainly a piece that should ideally be as simple as possible.