zlacker

[parent] [thread] 8 comments
1. IshKeb+(OP)[view] [source] 2023-11-27 20:50:18
Meaningless question. If you allow arbitrary number formats you can just define 1 to be an arbitrarily large number.
replies(2): >>happyt+H1 >>tromp+h2
2. happyt+H1[view] [source] 2023-11-27 20:57:43
>>IshKeb+(OP)
This is pedantic. Even mathematicians - famous pedants - embrace the subjective concept of "non-trivial" answers.
replies(1): >>roboca+I5
3. tromp+h2[view] [source] 2023-11-27 20:59:53
>>IshKeb+(OP)
But the lambda calculus is far from an arbitrary format. It's about the oldest and least arbitrary formalism for computation.
replies(1): >>Legion+em
4. roboca+I5[view] [source] [discussion] 2023-11-27 21:15:51
>>happyt+H1
To be pedantic, calling it pedantic is pedantic.

If the number representation is encoded outside the 64 bits then you have removed the 64 bit restriction. Of course it is hard to calculate how many bits of information are required to define the type. But "uint64" is pretty small and just requires our current human context (quite a few more bits of information) to make sense!

replies(1): >>tshadd+Lj
5. tshadd+Lj[view] [source] [discussion] 2023-11-27 22:23:45
>>roboca+I5
> If the number representation is encoded outside the 64 bits then you have removed the 64 bit restriction.

The explanation for how to interpret the 64 bit string is always outside of the 64 bit string. It's going to be tough to compare how big two explanations are since that's going to depend a lot on what each person considers to be a sufficient explanation.

I'm personally quite rusty on lambda calculus, and from glancing at the author's papers about lambda calculus encodings I suspect it will take a rather large number of bytes to represent an explanation that will make it through my thick skull!

6. Legion+em[view] [source] [discussion] 2023-11-27 22:36:07
>>tromp+h2
Perhaps the lambda calculus proper is the oldest, but the choices made in encoding the binary lambda calculus are somewhat arbitrary: for instance, why use de Bruijn indices for symbols, instead of some other scheme, such as the index of the lambda in the string? And why encode de Bruijn indices in unary, and not in any other 1-prefixed self-delimiting format? And why use a self-delimiting format in the first place? Obviously, these choices are quite reasonable and make for a simple implementation, but they're totally distinct from the formalism that Church first described. (And for that matter, Church originally only used lambda notation in the context of an overarching logical formalism that later got struck down.)
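For concreteness, the encoding choices being questioned can be sketched in a few lines (following the published BLC conventions: abstraction is "00", application is "01", and a de Bruijn index i is i ones followed by a zero; the tuple-based term representation is my own, for illustration):

```python
# Sketch of the BLC encoding under discussion (Tromp's conventions:
# abstraction -> "00", application -> "01", index i -> "1"*i + "0").
# The tuple term representation here is illustrative, not the author's.

def encode(term):
    """Encode a de Bruijn term as a BLC bit string.
    Terms: ("lam", body), ("app", fun, arg), ("var", i) with i >= 1."""
    tag = term[0]
    if tag == "var":
        return "1" * term[1] + "0"             # unary, 1-prefixed, self-delimiting
    if tag == "lam":
        return "00" + encode(term[1])
    if tag == "app":
        return "01" + encode(term[1]) + encode(term[2])
    raise ValueError(tag)

identity = ("lam", ("var", 1))                 # λ 1 in de Bruijn form
print(encode(identity))                        # -> 0010
print(encode(("app", identity, identity)))     # -> 0100100010
```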

Indeed, if we look at Church's paper, we'd find that the term is 80 symbols written fully, {λt[{{{{{t}(t)}(λhfn[{{{n}(h)}(f)}(n)])}(t)}(t)}(t)]}(λfx[{f}({f}(x))]), or 44 symbols when abbreviated, {λt t(t, (λh λf λn n(h, f, n)), t, t, t)}(λf λx f(f(x))), which aren't too impressive given the alphabet of 11 or 12 symbols.

replies(2): >>tromp+ks >>chrisw+Cy
7. tromp+ks[view] [source] [discussion] 2023-11-27 23:06:09
>>Legion+em
> but the choices made in encoding the binary lambda calculus are somewhat arbitrary

I made the simplest choices I could that do not waste bits.

> And why use a self-delimiting format in the first place?

Because a lambda term description has many different parts that you need to be able to separate from each other.

> And why encode de Bruijn indices in unary

I tried to answer that in more detail in this previous discussion [1].

[1] >>37584869

8. chrisw+Cy[view] [source] [discussion] 2023-11-27 23:44:55
>>Legion+em
> why use de Bruijn indices for symbols, instead of some other scheme, such as the index of the lambda in the string?

It's pretty natural to use de Bruijn indexes, since their meaning is local and their composition matches the language semantics (they have a denotational semantics, which gives expressions an intrinsic meaning). For example, with de Bruijn indices the expression `λ0` is always the identity function. If we tried a different scheme, like "the index of the lambda in the string", then expressions like `λ0` would have no intrinsic meaning; it depends on what "string" they occur in, which requires some concrete notion of "the string", which complicates any interpreter, etc.
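A tiny interpreter (my own illustrative sketch, using 0-based indices) makes the locality concrete: `λ0` denotes the identity function no matter what expression it sits inside:

```python
# Illustrative sketch (not the author's code): a de Bruijn term carries its
# meaning with it. Terms: ("var", i) with 0-based index, ("lam", body),
# ("app", fun, arg).

def evaluate(term, env=()):
    tag = term[0]
    if tag == "var":
        return env[term[1]]                    # index counts enclosing lambdas
    if tag == "lam":
        return lambda v: evaluate(term[1], (v,) + env)
    if tag == "app":
        return evaluate(term[1], env)(evaluate(term[2], env))
    raise ValueError(tag)

identity = ("lam", ("var", 0))                 # λ0
print(evaluate(identity)(42))                  # -> 42
# The same subterm embedded in a larger expression still means identity:
const = ("lam", ("lam", ("var", 1)))           # λλ1: returns its first argument
print(evaluate(("app", identity, const))("a")("b"))  # -> a
```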

Whenever I see an implementation of lambda calculus (or some derivative) which doesn't use de Bruijn indexes, I require some justification to convince me why not :) (There are such justifications, e.g. performance optimisations, use of a host language's naming facilities, etc.; but I can't think of any reason a self-contained, minimalist approach should avoid them.)

> why use a self-delimiting format in the first place?

There are various technical reasons to prefer self-delimiting languages when working in algorithmic information theory. For example, prefix-free (binary) encodings are paths through a (binary) tree, with programs as leaves. This lets us assign well-defined probability measures to the set of all programs (assign 1 to the root node, and divide each node's probability between its (two) children). It's also more natural when programs are executed before they're fully known; e.g. reading one bit at a time on-demand (say, generated by a coin toss).
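The measure argument can be checked numerically: each codeword p gets weight 2^-len(p), and Kraft's inequality guarantees the weights of a prefix-free set sum to at most 1 (the bit strings below are made up for illustration, not actual BLC programs):

```python
# Sketch of the probability measure on a prefix-free program set.
# Codewords here are illustrative only, not real BLC programs.

programs = ["0010", "010010", "11", "100"]

def is_prefix_free(codes):
    # No codeword may be a proper prefix of another.
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

assert is_prefix_free(programs)
total = sum(2.0 ** -len(p) for p in programs)
print(total)                                   # 0.453125, i.e. <= 1 as Kraft requires
```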

replies(1): >>Legion+xV
9. Legion+xV[view] [source] [discussion] 2023-11-28 02:40:58
>>chrisw+Cy
Yeah, I know that de Bruijn indices and self-delimiting formats make a lot of sense, and are probably among the most natural options. My questions were mostly rhetorical; my main point is to challenge the idea that BLC and its transition function in particular should be considered as a fundamental measure of complexity, since it clearly incorporates a fair few design decisions external to the lambda calculus itself.