zlacker

[return to "The largest number representable in 64 bits"]
1. Legion+2c1 2023-04-23 23:19:07
>>tromp+(OP)
It's interesting to try to figure out the conceptual basis for the greater power-per-bit of BLC compared to classical TMs. I wonder if it simply comes down to how β-reduction can pack many more copy-and-pastes into a byte than TM states can, especially relative to the normal form that we use to define BB_λ. At small sizes, classical TMs are forced into weird Collatz-like tricks, whereas BLC gets nearly-instant exponentiation. Do you have any thoughts on this?
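
For concreteness, here's a rough Haskell sketch of what I mean by nearly-instant exponentiation, with Church numerals standing in for the actual BLC terms: each extra exponentiation costs only a constant number of bits of term, while the normal form towers up.

    {-# LANGUAGE RankNTypes #-}

    -- Church numeral: n f x applies f to x exactly n times.
    newtype Church = Church (forall a. (a -> a) -> a -> a)

    church :: Int -> Church
    church k = Church (\f x -> iterate f x !! k)

    unchurch :: Church -> Integer
    unchurch (Church n) = n (+ 1) 0

    -- Exponentiation n^m is a single application of one numeral to the
    -- other: the lambda term \n m -> m n, only a dozen or so bits in BLC.
    expC :: Church -> Church -> Church
    expC (Church n) (Church m) = Church (m n)

    main :: IO ()
    main = print (unchurch (church 2 `expC` (church 2 `expC` church 3)))
    -- 2^(2^3) = 256; every extra expC adds a level to the tower while the
    -- term itself grows by only a constant number of bits.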

(Also, I've been thinking about how that BB_λ conjecture might be proven. One strategy, for sufficiently large n, would be to create a compression scheme for TMs that omits duplicate machines and trivial machines that cannot contribute to BB_TM, to get past the naive 2n(2+log₂(n+1))-bit bound. Then we create a BLC program consisting of a decompressor and a TM interpreter, to which a compressed TM can be appended. But the compression would have to be good enough to overcome the overhead of bit sequences not being natively expressible in BLC.)
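
A quick back-of-the-envelope in Haskell for where that naive bound comes from, assuming the usual n-state, 2-symbol transition table: 2n transitions, each needing a written symbol, a move direction, and a next state out of n+1 (including halt).

    -- Naive encoding size in bits for an n-state, 2-symbol TM:
    -- 2n transitions x (1 bit symbol + 1 bit direction + log2(n+1) bits next state).
    naiveBits :: Double -> Double
    naiveBits n = 2 * n * (2 + logBase 2 (n + 1))

    main :: IO ()
    main = mapM_ report [5, 10, 100, 1000]
      where
        report n = putStrLn (show (round n :: Integer) ++ " states: ~" ++ show (ceiling (naiveBits n) :: Integer) ++ " bits")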

2. tromp+V42 2023-04-24 08:49:16
>>Legion+2c1
Binary Lambda Calculus seems to spend its bits much more wisely than a Turing Machine encoding, working at a much higher level of abstraction. Defining and applying functions performs more useful work than moving a tape head around and jumping from state to state. The Turing Machine is also severely hampered by the purely sequential nature of the tape, whereas lambda calculus has access to all kinds of easily defined data structures.
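
A rough Haskell sketch of the data-structure point, with Church-style encodings standing in for raw lambda terms: a pair or a list is just a few lambdas, and reaching a component needs no tape walking.

    {-# LANGUAGE RankNTypes #-}

    -- Church-encoded pair and list: each is only a few lambdas to define.
    type Pair a b = forall r. (a -> b -> r) -> r
    type List a   = forall r. (a -> r -> r) -> r -> r   -- a list as its own fold

    pair :: a -> b -> Pair a b
    pair x y k = k x y

    first :: Pair a b -> a
    first p = p (\x _ -> x)

    nil :: List a
    nil _ z = z

    cons :: a -> List a -> List a
    cons x xs k z = k x (xs k z)

    sumL :: List Integer -> Integer
    sumL xs = xs (+) 0

    main :: IO ()
    main = do
      print (first (pair "hello" "world"))          -- "hello"
      print (sumL (cons 1 (cons 2 (cons 3 nil))))   -- 6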

> One strategy I'm thinking of for sufficiently large n would be to create a compression scheme for TMs

The universal variant https://oeis.org/A361211 of the lambda busy beaver can easily simulate any binary-encoded TM with fixed overhead.
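
Not the actual A361211 construction, but a rough Haskell sketch of why the overhead is fixed: the stepper below is machine independent, and only the transition table (the part a compressed encoding would have to supply) grows with the number of states.

    import Data.Map (Map)
    import qualified Data.Map as Map

    type State = Int
    data Dir   = L | R

    -- (current state, symbol under head) -> (symbol written, move, next state);
    -- a missing entry means halt.
    type Table = Map (State, Bool) (Bool, Dir, State)

    -- Tape as a zipper: cells left of the head (nearest first), head cell, cells right.
    data Tape = Tape [Bool] Bool [Bool]

    move :: Dir -> Tape -> Tape
    move L (Tape (l:ls) h rs) = Tape ls l (h:rs)
    move L (Tape []     h rs) = Tape [] False (h:rs)   -- blank cells on demand
    move R (Tape ls h (r:rs)) = Tape (h:ls) r rs
    move R (Tape ls h [])     = Tape (h:ls) False []

    -- The machine-independent part: its size does not depend on the table.
    run :: Table -> State -> Tape -> Tape
    run table s (Tape ls h rs) =
      case Map.lookup (s, h) table of
        Nothing         -> Tape ls h rs
        Just (w, d, s') -> run table s' (move d (Tape ls w rs))

    main :: IO ()
    main = print (onesWritten (run demo 1 (Tape [] False [])))
      where
        -- demo machine: write a 1 and step right in states 1..3, then halt
        -- (state 4 has no rules); prints 3.
        demo = Map.fromList [ ((i, False), (True, R, i + 1)) | i <- [1 .. 3] ]
        onesWritten (Tape ls h rs) = length (filter id (reverse ls ++ h : rs))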
