First of all, every finite number is computable by definition.
And second, your encodings will, unlike those in the lambda calculus, be completely arbitrary.
PS: in my self-delimiting encoding of the lambda calculus, there are only 1058720968043859 < 2^50 closed lambda terms of size up to 64 [1].
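For the curious, that count can be reproduced with a short recurrence. Here's a sketch (mine, not the author's code), assuming the size conventions from the paper: a de Bruijn index i costs i+1 bits, and abstraction and application each add 2 bits:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def terms(n, depth):
        # Count de Bruijn terms of exactly n bits whose free indices are <= depth.
        total = 1 if 2 <= n <= depth + 1 else 0    # variable i is 1^i 0, i.e. i+1 bits
        if n >= 4:
            total += terms(n - 2, depth + 1)       # 00 <body>: abstraction
            total += sum(terms(j, depth) * terms(n - 2 - j, depth)
                         for j in range(2, n - 3)) # 01 <fun> <arg>: application
        return total

    print(sum(terms(n, 0) for n in range(65)))     # closed terms of size <= 64

If I've transcribed the size conventions correctly, this should print the figure quoted above.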
To the sibling comment about arbitrariness, we could use a hybrid where we trade off some bits from IEEE FP to introduce far reaches, and also some precision there... so, like, keep 32 or 64 bits for IEEE compatibility, then switch to "extended" ranges for e.g. BB numbers, higher alephs, etc.

There was this one system for calculation with infinities that avoided the Hilbert Hotel problem... can't find it, but it was called something like Infinioid or some other play on the name. Would be neat to bolt those on too :)

Edit: "grossone" is the calculus for infinities... love this work! https://www.theinfinitycomputer.com/
Since some commenters pointed out how awfully spammy that website is (which I had failed to notice due to my browser's adblockers), I recently decided to slightly rewrite and expand the article to host it on my newly formed personal blog.
This sort of trick/hack is the reason why theorems in (algorithmic) information theory involve constant factors. For example, we can define an image compressor which outputs a single `1` when given the Lenna test image[0], and otherwise acts exactly like PNG except prefixing its output with a `0`. To decode: a `1` decodes to the Lenna image, and anything starting with `0` gets decoded as PNG without the leading `0`. This gives perfect compression with no loss of quality, when tested on that Lenna image ;)
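A toy sketch of that scheme (hedged: zlib stands in for PNG, whole bytes stand in for bits, and lenna.png is an assumed local copy of the test image):

    import zlib

    LENNA = open("lenna.png", "rb").read()     # assumption: the test image on disk

    def compress(image):
        if image == LENNA:
            return b"\x01"                     # the whole image in one byte
        return b"\x00" + zlib.compress(image)  # everyone else pays a one-byte tax

    def decompress(data):
        if data == b"\x01":
            return LENNA
        return zlib.decompress(data[1:])

    assert decompress(compress(LENNA)) == LENNA

The constant factor in the invariance theorem absorbs exactly this kind of special-casing.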
[1] https://www.youtube.com/watch?v=q6Etl4oGL4U&list=PL-R4p-BRL8...
I made the simplest choices I could that do not waste bits.
> And why use a self-delimiting format in the first place?
Because a lambda term description has many different parts that you need to be able to separate from each other.
> And why encode de Bruijn indices in unary?
I tried to answer that in more detail in this previous discussion [1].
[1] >>37584869
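To make that concrete, here's a minimal encoder sketch (mine, not taken from the paper) for the format as described:

    # Terms: ("lam", body), ("app", fun, arg), ("var", i) with de Bruijn index i >= 1.
    def encode(t):
        if t[0] == "lam":
            return "00" + encode(t[1])
        if t[0] == "app":
            return "01" + encode(t[1]) + encode(t[2])
        return "1" * t[1] + "0"               # index i in unary: i ones, then a zero

    # Church numeral two, \f.\x. f (f x):
    two = ("lam", ("lam", ("app", ("var", 2), ("app", ("var", 2), ("var", 1)))))
    print(encode(two))                        # 0000011100111010

Since each prefix (00, 01, or 1...0) tells the decoder exactly what comes next, the parts separate themselves with no length fields needed.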
That's basically Platonism. I think it's a reasonable position for some things, e.g. Booleans (two-valued logic), natural/integer/rational numbers, tuples, lists, binary trees, etc. I think it's meaningful to talk about, say, the number 2, separately from the way it may be encoded in RAM as a model of e.g. the number of items in a user's shopping cart.
This position gets less reasonable/interesting/useful as we consider data whose properties are more arbitrary and less "natural"; e.g. there's not much point separating the "essence" of an IEEE754 double-precision float from its representation in RAM, or pontificating about the fundamental nature of an InternalFrameInternalFrameTitlePaneInternalFrameTitlePaneMaximizeButtonWindowNotFocusedState [0].
The question in the article is whether lambda calculus is "natural" enough to be usefully Platonic. It's certainly a better candidate than, say, Javascript; although I have a soft spot for combinatory logic (for which the author has also created a binary encoding, though its self-interpreter is slightly larger), and alternatives like concatenative languages, linear combinators (which seem closer to physics), etc.
[0] https://web.archive.org/web/20160818035145/http://www.javafi...
Reminds me of the hilarious and brilliant: http://tom7.org/nand/
I'm not sure what a "course on reading and using" has to do with description complexity? In any case, it takes 206 bits to implement a binary lambda calculus interpreter (that's Theorem 1 in http://tromp.github.io/cl/LC.pdf )
https://www.theinfinitycomputer.com/wp-content/uploads/2020/...
> This property mirrors a notion of optimality for shortest description lengths, where it’s known as the Invariance theorem:
with the latter linking to https://en.wikipedia.org/wiki/Kolmogorov_complexity#Invarian...
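(For reference, the theorem says: for a universal description method U and any other method V, there is a constant c_V with K_U(x) <= K_V(x) + c_V for all x. The constant is essentially the length of a V-interpreter written for U, and depends on V but not on the string x being described.)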
For example, say you're refactoring some code and come across:
    def foo(x):
        return bar(x)

You decide to simplify this definition to:

    foo = bar
Congratulations, you've just performed η-reduction! https://en.wikipedia.org/wiki/Lambda_calculus#%CE%B7-reducti...

Okay, let's ignore arithmetic and just allow comparison. As you've said, a common practice is to normalize it into some standard notation with a well-founded ordering. But there is no mechanical way to convert (or even bound) a computational representation to such a notation---the general approach is therefore to compute a difference and check its sign. Not really good when that can continue even after the heat death of the universe...
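To see why that sign check can outlast the universe, here's a sketch, assuming a computable real is represented as a function approx(n) returning a rational within 2^-n of the true value:

    from fractions import Fraction

    def compare(x, y):
        n = 0
        while True:
            ax, ay = x(n), y(n)
            eps = Fraction(1, 2**n)
            if ax - eps > ay + eps:
                return ">"            # intervals separated: sign is decided
            if ax + eps < ay - eps:
                return "<"
            n += 1                    # intervals overlap: refine further

    half = lambda n: Fraction(1, 2)   # exact approximations, for the demo
    third = lambda n: Fraction(1, 3)
    print(compare(half, third))       # ">" after a few refinements

If x and y happen to be equal, the intervals overlap forever and the loop never returns --- that's the heat-death case.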
Frankly speaking, I rather expected to see some improvement over Level-Index number systems [1], but it turns out that this post is completely unrelated to number formats. Otherwise it is good, hence my mild frustration here :S
I'm very confused that you say posits are "more or less similar" to Binary Lambda Calculus. Posits are an inert data encoding: to interpret a posit as a number, we plug its parts (sign, regime, exponent and fraction) into a simple formula to get a numerical value. Those parts can have varying size (e.g. for soft posits), but the range of results is fundamentally limited by its use of exponentials.
In contrast, BLC is a Turing-complete, general-purpose programming language. To interpret a BLC value as a number, we:
- Parse its binary representation into lambda abstractions (00), function calls (01) and de-Bruijn-indexed variables (encoded in unary)
- Apply the resulting term to the symbol `one-more-than` and the symbol `zero`
- Beta-reduce that expression until it reaches a normal form. There is no way, in general, to figure out whether this step will finish or get stuck forever: even if it does eventually finish, that could take trillions of years or longer.
- Read the resulting symbols as a unary number, e.g. `one-more-than (one-more-than zero)` is the number two
Posit-style numbers can certainly be represented in BLC, by writing a lambda abstraction that implements the posit formula; but BLC can implement any other computable function, which includes many that grow much faster than the exponentials used in the posit formula (e.g. this has a nice explanation of many truly huge numbers https://www.reddit.com/r/math/comments/283298/how_to_compute... )
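To make those steps concrete, here's a minimal sketch (mine, not from the article). Note that it reduces via Python's own applicative-order evaluation, so it can loop on terms that normal-order reduction would normalize:

    def parse(bits, i=0):
        # Parse one self-delimiting term starting at bits[i]; return (term, next i).
        if bits[i:i+2] == "00":                    # 00 -> lambda abstraction
            body, i = parse(bits, i + 2)
            return ("lam", body), i
        if bits[i:i+2] == "01":                    # 01 -> application
            f, i = parse(bits, i + 2)
            a, i = parse(bits, i)
            return ("app", f, a), i
        n = 0                                      # 1^n 0 -> de Bruijn index n
        while bits[i] == "1":
            n, i = n + 1, i + 1
        return ("var", n), i + 1

    def interp(t, env=()):
        # Realize a de Bruijn term as a Python closure; Python's own function
        # application plays the role of beta reduction.
        if t[0] == "var":
            return env[t[1] - 1]                   # index 1 = innermost binder
        if t[0] == "lam":
            return lambda x: interp(t[1], (x,) + env)
        return interp(t[1], env)(interp(t[2], env))

    def blc_to_int(bits):
        term, _ = parse(bits)
        return interp(term)(lambda n: n + 1)(0)    # apply to one-more-than, zero

    print(blc_to_int("0000011100111010"))          # Church numeral two -> 2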
> Therefore, for any integer N, it is computable.
> If I say: f(x) = floor(xN)
For many definitions of a real, it's not at all clear whether you can compute this f(x). The ability to compute f already amounts to being able to approximate x arbitrarily well. For example, for Chaitin's number, you cannot compute your f except for a few small values of N.
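Concretely: anyone who hands you f(N) = floor(x*N) for every N has handed you arbitrarily good rational approximations of x. A sketch, where math.isqrt fakes the oracle for the computable real x = sqrt(2):

    from fractions import Fraction
    from math import isqrt

    def approximate(f, N):
        # If f(N) = floor(x*N), then f(N)/N is within 1/N of x.
        return Fraction(f(N), N)

    f = lambda N: isqrt(2 * N * N)       # floor(sqrt(2)*N), computed exactly
    print(float(approximate(f, 10**6)))  # 1.414213...

For Chaitin's number, no such f exists, which is exactly why x itself is uncomputable.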