zlacker

[parent] [thread] 8 comments
1. tromp+(OP)[view] [source] 2023-09-20 07:12:19
> The appendix is a great segue to Church's Lambda Calculus, which will probably be my next project.

Note that a straightforward universal machine for the lambda calculus can be orders of magnitude smaller than for Turing machines [1].

[1] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...

replies(2): >>jekude+HC >>danbru+EV
2. jekude+HC[view] [source] 2023-09-20 13:12:01
>>tromp+(OP)
Hey John, huge fan, and thank you for the link! I've been struggling with which paper to use to implement the Lambda Calculus (I prefer original source material, because I feel that I learn a little more that way). I started with "An Unsolvable Problem of Elementary Number Theory" [1], but have now temporarily settled on Church's book "The Calculi of Lambda-Conversion" [2] which is a bit more explanatory and is less focused on the decision problem. Curious if you have a recommendation?

[1] https://www.ics.uci.edu/~lopes/teaching/inf212W12/readings/c...

[2] https://compcalc.github.io/public/church/church_calculi_1941...

replies(1): >>tromp+zU
3. tromp+zU[view] [source] [discussion] 2023-09-20 14:42:26
>>jekude+HC
In what language do you want to implement the lambda calculus? I think that while Church's writings are great background material, they do not make the best guides for implementation.
replies(1): >>jekude+331
4. danbru+EV[view] [source] 2023-09-20 14:47:02
>>tromp+(OP)
Is the unary encoding of the indices in some way optimal or could one compress the representations further using some [variable length] binary encoding, maybe at the loss of some other desirable properties?
replies(1): >>tromp+Hm3
5. jekude+331[view] [source] [discussion] 2023-09-20 15:19:54
>>tromp+zU
I have a secret goal to simulate a Turing Machine in the Lambda Calculus, and vice versa, so I was hoping to implement both in the same language so that interoperability would be easier.

I chose Go for the Turing Machines because I enjoy writing it, and planned to blindly use Go again for the Lambda Calculus for the reason above, but if you have a recommendation I'd love to hear it!

replies(1): >>tromp+se1
6. tromp+se1[view] [source] [discussion] 2023-09-20 16:14:42
>>jekude+331
Go is similar to C in that neither supports closures in the form of lambda expressions with untyped arguments. For my own implementation of the lambda calculus in C, I chose to implement a so-called Krivine machine [1], which is one of the simplest abstract machines for the call-by-name lambda calculus.
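
A Krivine machine fits in a few dozen lines of Go. The sketch below is illustrative only (the names `Term`, `Closure`, and `Eval` are mine, not taken from the C implementation linked above): terms use 0-based de Bruijn indices, and the machine state is a term, an environment of suspended closures, and an argument stack, reduced to weak head normal form under call-by-name.

```go
package main

import "fmt"

const (
	Var = iota // de Bruijn index
	Lam        // abstraction, body in L
	App        // application, function in L, argument in R
)

// Term is a lambda term in (0-based) de Bruijn notation.
type Term struct {
	Kind int
	Idx  int   // used when Kind == Var
	L, R *Term // used when Kind == Lam or App
}

// Closure is a term suspended together with its environment.
type Closure struct {
	T   *Term
	Env *Env
}

// Env is a linked list of closures, one per enclosing lambda.
type Env struct {
	Head Closure
	Tail *Env
}

func lookup(e *Env, i int) Closure {
	for ; i > 0; i-- {
		e = e.Tail
	}
	return e.Head
}

// Eval runs the Krivine machine on a closed term: applications push
// their argument as a closure, a lambda pops one argument into the
// environment, and a variable resumes the closure it is bound to.
func Eval(t *Term, env *Env, stack []Closure) Closure {
	for {
		switch t.Kind {
		case App:
			stack = append(stack, Closure{t.R, env})
			t = t.L
		case Lam:
			if len(stack) == 0 {
				return Closure{t, env} // weak head normal form
			}
			env = &Env{stack[len(stack)-1], env}
			stack = stack[:len(stack)-1]
			t = t.L
		case Var:
			c := lookup(env, t.Idx)
			t, env = c.T, c.Env
		}
	}
}

func main() {
	// (λx.x)(λx.x) reduces to λx.x.
	id := &Term{Kind: Lam, L: &Term{Kind: Var, Idx: 0}}
	res := Eval(&Term{Kind: App, L: id, R: id}, nil, nil)
	fmt.Println(res.T.Kind == Lam)
}
```

Note the machine only reduces to weak head normal form and assumes closed terms; `lookup` will dereference nil on a free variable.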

Although I never wrote a Turing Machine interpreter in the lambda calculus, I did write one for its close cousin, the Brainfuck language [2].

[1] https://www.irif.fr/~krivine/articles/lazymach.pdf

[2] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...

7. tromp+Hm3[view] [source] [discussion] 2023-09-21 07:18:50
>>danbru+EV
A unary encoding is certainly optimal in simplicity.

For encoding size, it would be optimal if index i occurred with frequency roughly 2^-i. In many lambda terms of practical interest, higher indices do occur much less frequently, so it's not terribly far from optimal. Some compression is certainly possible: within n binding lambdas, the maximal index n could be encoded as 1^n instead of 1^n 0, since the terminating 0 is redundant when no larger index can occur, but again that severely complicates the interpreter itself.
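
The 1^i 0 scheme above is the index encoding of binary lambda calculus, where abstraction is written 00 and application 01. A minimal encoder in Go (an illustrative sketch with my own names, using 1-based indices as in BLC):

```go
package main

import (
	"fmt"
	"strings"
)

const (
	Var = iota // de Bruijn index (1-based)
	Lam        // abstraction, body in L
	App        // application, function in L, argument in R
)

// Term is a lambda term in (1-based) de Bruijn notation.
type Term struct {
	Kind int
	Idx  int
	L, R *Term
}

// Encode emits binary lambda calculus: 00 for abstraction,
// 01 for application, and 1^i 0 for de Bruijn index i.
func Encode(t *Term) string {
	switch t.Kind {
	case Lam:
		return "00" + Encode(t.L)
	case App:
		return "01" + Encode(t.L) + Encode(t.R)
	default:
		return strings.Repeat("1", t.Idx) + "0"
	}
}

func main() {
	// λx.x encodes as 00 10.
	id := &Term{Kind: Lam, L: &Term{Kind: Var, Idx: 1}}
	fmt.Println(Encode(id)) // prints "0010"
	// λx.λy.x (the K combinator) encodes as 00 00 110.
	k := &Term{Kind: Lam, L: &Term{Kind: Lam, L: &Term{Kind: Var, Idx: 2}}}
	fmt.Println(Encode(k)) // prints "0000110"
}
```

The code is self-delimiting: a decoder always knows from the next one or two bits which case it is in, which is what keeps an interpreter for it small.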

replies(1): >>danbru+Pp4
8. danbru+Pp4[view] [source] [discussion] 2023-09-21 14:37:20
>>tromp+Hm3
My first comment was worded stupidly, or at least I forgot to actually ask what I was really wondering.

I noticed that some of the program lengths ended up in expressions for lower and upper bounds. Also, lambda terms represented with De Bruijn indices are essentially lists of numbers, and a binary encoding could give an exponentially shorter representation than a unary one, at the price of some overhead when dealing with the binary numbers, which I thought might be a constant.

But admittedly I did not read the page too carefully and would probably need to refresh my knowledge to properly understand the details. The programs there are also mostly short, so a binary encoding would probably make them longer. And if it really mattered, then this is of course such an obvious thing to do that it would certainly not have been overlooked.

replies(1): >>tromp+ki5
9. tromp+ki5[view] [source] [discussion] 2023-09-21 18:15:04
>>danbru+Pp4
> I noticed that some of the program lengths ended up in expressions of lower and upper bounds.

Several of the most celebrated theorems in AIT, such as Symmetry of Information, employ programs that need to interpret other programs. So constants in these theorems significantly suffer from complicated index encodings.

A binary encoding is not as simple as you might think, as the code needs to be self-delimiting. So you first need to encode the number of bits used in the binary code. And that must also be self-delimiting. As you can see, you need to fall back to a unary encoding at some point, as that is naturally self-delimiting.
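
One standard self-delimiting binary code that illustrates this unary fallback is the Elias gamma code: the bit-length of n is first given in unary, then n itself follows in binary. A minimal sketch in Go (purely an illustration of the idea, not necessarily the code at [1]):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Gamma encodes n >= 1 as an Elias gamma code: len(binary(n))-1
// zeros, then n in binary. The run of zeros is a unary length
// prefix, terminated by the leading 1 of the binary part -- the
// fallback to unary that makes the code self-delimiting.
func Gamma(n uint64) string {
	bits := strconv.FormatUint(n, 2)
	return strings.Repeat("0", len(bits)-1) + bits
}

func main() {
	for _, n := range []uint64{1, 5, 9} {
		fmt.Printf("%d -> %s\n", n, Gamma(n))
	}
}
```

Gamma(n) takes 2⌊log2 n⌋+1 bits, versus n+1 for the unary 1^n 0 code, so it wins for large n; the price is a more complicated decoder, which is exactly the overhead at issue here.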

An example of an efficient binary self-delimiting code can be found at [1]. Note that the program for decoding it into a Church numeral (the very best case) is already about 60% of the size of the entire self-interpreter.

Many programs used in AIT proofs use only single digit indices (the self-interpreter e.g. only uses 1-5) and these would be negatively impacted by a binary encoding.

[1] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...
