zlacker

The largest number representable in 64 bits

submitted by tromp+(OP) on 2023-11-25 16:01:22 | 111 points 99 comments
[view article] [source] [go to bottom]

◧◩
8. tromp+Ne6[view] [source] [discussion] 2023-11-27 21:07:21
>>addaon+ab6
> we can choose any computable number to be encoded by the string 64'b1

First of all, every finite number is computable by definition.

And second, your encodings will, unlike those in the lambda calculus, be completely arbitrary.

PS: in my self-delimiting encoding of the lambda calculus, there are only 1058720968043859 < 2^50 closed lambda terms of size up to 64 [1].

[1] https://oeis.org/A114852

◧◩
10. pmayrg+Tg6[view] [source] [discussion] 2023-11-27 21:15:28
>>addaon+ab6
came here and searched for "busy". good job :)

To the sibling comment about arbitrariness, we could use a hybrid where we trade off some bits from IEEE FP to introduce far reaches and also some precision there.. so like, keep 32 bits or 54 bits for IEEE compatibility, then switch to "extended" ranges for e.g. BB numbers, higher alephs, etc..

There was this one system for calculation with infinities that avoided the Hilbert Hotel problem.. can't find it but was called smth like Infinioid or some other play on the name. Would be neat to bolt those on too :)

Edit: "grossone" is the calculus for infinities.. love this work! https://www.theinfinitycomputer.com/

17. tromp+Hm6[view] [source] 2023-11-27 21:39:35
>>tromp+(OP)
I originally wrote this article back in March on the googology website, and it received some discussion at >>35677148

Since some commenters pointed out how awfully spammy that website is (which I had failed to notice due to my browser's adblockers), I recently decided to slightly rewrite and expand the article to host it on my newly formed personal blog.

◧◩◪
20. chrisw+Jq6[view] [source] [discussion] 2023-11-27 21:59:53
>>pmayrg+Tg6
> To the sibling comment about arbitrariness, we could use a hybrid where we trade off some bits from IEEE FP to introduce far reaches and also some precision there.. so like, keep 32 bits or 54 bits for IEEE compatibility, then switch to "extended" ranges for e.g. BB numbers, higher alephs, etc.

This sort of trick/hack is the reason why theorems in (algorithmic) information theory involve constant factors. For example, we can define an image compressor which outputs a single `1` when given the Lenna test image[0], and otherwise acts exactly like PNG except prefixing its output with a `0`. To decode: a `1` decodes to the Lenna image, and anything starting with `0` gets decoded as PNG without the leading `0`. This gives perfect compression with no loss of quality, when tested on that Lenna image ;)

[0] https://en.wikipedia.org/wiki/Lenna
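The trick is easy to make concrete. A minimal sketch, with `zlib` standing in for PNG and a placeholder byte string standing in for the actual Lenna image:

```python
import zlib

# Placeholder: pretend these are the actual bytes of the Lenna test image.
LENNA = b"pretend these are the Lenna image bytes"

def compress(data: bytes) -> bytes:
    if data == LENNA:
        return b"\x01"                    # the special-cased image becomes a single byte
    return b"\x00" + zlib.compress(data)  # everything else pays a one-byte penalty

def decompress(blob: bytes) -> bytes:
    if blob[:1] == b"\x01":
        return LENNA
    return zlib.decompress(blob[1:])
```

Round-trips are lossless for every input; we get "perfect" compression on exactly one image at the cost of one extra byte on all others — which is exactly the additive constant that shows up in the invariance theorem.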

27. kstrau+nu6[view] [source] 2023-11-27 22:20:21
>>tromp+(OP)
I'm going with '9↑↑9', which is 64 bits of UTF-8. (See https://en.wikipedia.org/wiki/Knuth's_up-arrow_notation)
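For anyone checking the byte count, or wanting to play with small arguments (9↑↑9 itself is far too large to ever evaluate), a quick sketch:

```python
def up(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b: one arrow is exponentiation,
    each extra arrow iterates the operation below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# '9' is 1 byte in UTF-8; '↑' (U+2191) is 3 bytes: 1 + 3 + 3 + 1 = 8 bytes = 64 bits.
assert len("9↑↑9".encode("utf-8")) * 8 == 64
```

E.g. `up(2, 2, 3)` is 2↑↑3 = 2^(2^2) = 16; already 3↑↑3 = 3^27 = 7625597484987, and it only gets worse from there.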
◧◩◪◨
28. tromp+qu6[view] [source] [discussion] 2023-11-27 22:20:29
>>lowq+js6
I find Loader's number [1] more interesting, as it is actually computable, yet far far larger than other famous computable numbers, like Friedman's TREE(3) or SCG(3). I'm looking forward to one day programming it in the lambda calculus, and seeing how much smaller it can be than the existing ~500 bytes of C code.

[1] https://www.youtube.com/watch?v=q6Etl4oGL4U&list=PL-R4p-BRL8...

◧◩◪
43. tromp+hC6[view] [source] [discussion] 2023-11-27 22:58:33
>>fanf2+Jl6
Note that 'billion' and 'trillion' themselves have multiple meanings [1] [2], even though most people use only the short scale.

[1] https://en.wikipedia.org/wiki/Billion

[2] https://en.wikipedia.org/wiki/Trillion

◧◩◪◨
45. tromp+xD6[view] [source] [discussion] 2023-11-27 23:06:09
>>Legion+rx6
> but the choices made in encoding the binary lambda calculus are somewhat arbitrary

I made the simplest choices I could that do not waste bits.

> And why use a self-delimiting format in the first place?

Because a lambda term description has many different parts that you need to be able to separate from each other.

> And why encode de Bruijn indices in unary

I tried to answer that in more detail in this previous discussion [1].

[1] >>37584869
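For illustration, here is a sketch of the encoder (the nested-tuple term representation is my own; the bit patterns — 00 for abstraction, 01 for application, 1^n 0 for de Bruijn index n — are from the author's spec):

```python
def encode(term) -> str:
    """Self-delimiting BLC bits: 00 = lambda, 01 = application, 1**n 0 = index n."""
    tag = term[0]
    if tag == "lam":
        return "00" + encode(term[1])
    if tag == "app":
        return "01" + encode(term[1]) + encode(term[2])
    return "1" * term[1] + "0"  # ("var", n): de Bruijn index n, innermost binder = 1

I = ("lam", ("var", 1))           # λx.x      encodes as 0010
K = ("lam", ("lam", ("var", 2)))  # λx.λy.x   encodes as 0000110
```

Because every term announces its own shape as you read it left to right, concatenated encodings split unambiguously with no separator bits — that's what "self-delimiting" buys you.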

◧◩◪
46. chrisw+KF6[view] [source] [discussion] 2023-11-27 23:19:33
>>teknop+uh6
> People sometimes mistakenly think that numbers or data in computers exist in some meaningful way.

That's basically Platonism. I think it's a reasonable position for some things, e.g. Booleans (two-valued logic), natural/integer/rational numbers, tuples, lists, binary trees, etc. I think it's meaningful to talk about, say, the number 2, separately from the way it may be encoded in RAM as a model of e.g. the number of items in a user's shopping cart.

This position gets less reasonable/interesting/useful as we consider data whose properties are more arbitrary and less "natural"; e.g. there's not much point separating the "essence" of an IEEE754 double-precision float from its representation in RAM; or pontificating about the fundamental nature of an InternalFrameInternalFrameTitlePaneInternalFrameTitlePaneMaximizeButtonWindowNotFocusedState[0].

The question in the article is whether lambda calculus is "natural" enough to be usefully Platonic. It's certainly a better candidate than, say, Javascript; although I have a soft spot for combinatory logic (which the author has also created a binary encoding for; although its self-interpreter is slightly larger), and alternatives like concatenative languages, linear combinators (which seem closer to physics), etc.

[0] https://web.archive.org/web/20160818035145/http://www.javafi...

◧◩
48. loxias+4H6[view] [source] [discussion] 2023-11-27 23:27:20
>>matja+5c6
> But you could use a representation where 0 is 0, and 1 is infinity, saving 63 bits...

Reminds me of the hilarious and brilliant: http://tom7.org/nand/

◧◩◪◨⬒⬓⬔⧯
56. chrisw+QP6[view] [source] [discussion] 2023-11-28 00:20:57
>>8note+tB6
> The course on reading and using lambda calculus is similarly longer than the actual lambda calculus expression

I'm not sure what a "course on reading and using" has to do with description complexity? In any case, it takes 206 bits to implement a binary lambda calculus interpreter (that's Theorem 1 in http://tromp.github.io/cl/LC.pdf )

◧◩◪◨
58. chrisw+wR6[view] [source] [discussion] 2023-11-28 00:32:45
>>eru+2K6
The author has also implemented Binary Combinatory Logic. I also find combinators more "natural" than lambda calculus (although the latter is certainly more useful day-to-day!); however, I still find lambda calculus a reasonable foundation since the Binary Lambda Calculus self-interpreter is shorter than the Binary Combinatory Logic self-interpreter (e.g. see the author's page at http://tromp.github.io/cl/cl.html )
◧◩◪
61. pmayrg+LY6[view] [source] [discussion] 2023-11-28 01:32:56
>>pmayrg+Tg6
anyone really interested, i'm just rereading an earlyish (2010) paper by Sergeyev.. it's amazing

https://www.theinfinitycomputer.com/wp-content/uploads/2020/...

◧◩
76. tromp+pD7[view] [source] [discussion] 2023-11-28 08:37:41
>>userbi+da7
I mention Kolmogorov complexity in the form of "shortest description lengths" at

> This property mirrors a notion of optimality for shortest description lengths, where it’s known as the Invariance theorem:

with the latter linking to https://en.wikipedia.org/wiki/Kolmogorov_complexity#Invarian...

◧◩◪◨⬒⬓
80. chrisw+mP7[view] [source] [discussion] 2023-11-28 10:37:22
>>Madnes+Cu7
Lambda calculus is present in most "modern" programming languages, i.e. those which use lexical scope rather than global or dynamic scope; and those which call functions to return results (especially first-class functions) rather than jumping to subroutines for their effects. It's why Python functions are written with the keyword `lambda`, and it's why Amazon's function-as-a-service product is called Lambda.

For example, say you're refactoring some code and come across:

    def foo(x):
      return bar(x)
You decide to simplify this definition to:

    foo = bar
Congratulations, you've just performed η-reduction! https://en.wikipedia.org/wiki/Lambda_calculus#%CE%B7-reducti...
◧◩◪
81. lifthr+BV7[view] [source] [discussion] 2023-11-28 11:53:20
>>tromp+OD7
I mean, as others pointed out, a number format typically needs much more than just a representation. If there is a bit-size limitation, any operation has to round any excess back into the limit. A computational representation based on Turing machines or lambda calculus---again, not very different from a constructive real number in usage---does not provide an easy way to do that. That wouldn't have been an issue if there were no bit-size limitation.

Okay, let's ignore arithmetic and just allow comparison. As you've said, a common practice is to normalize into some standard notation with a well-founded ordering. But there is no mechanical way to convert (or even bound) a computational representation to such a notation---the general approach is therefore to compute a difference and check its sign. Not really good when that computation can continue even after the heat death of the universe...

Frankly speaking, I rather expected to see some improvement over Level-Index number systems [1], but it turns out that this post is completely unrelated to number formats. Otherwise it is good, hence my mild frustration here :S

[1] https://www.mrob.com/pub/math/altnum.html#li_sli
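The comparison problem is easy to make concrete. A sketch, assuming each real is given as a function x(n) returning a rational within 2^-n of the true value (my own interface, not anything from the post):

```python
from fractions import Fraction

def compare(x, y, max_bits=64):
    """Order two computable reals by refining approximations until the gap
    exceeds both error bounds. Returns 1, -1, or None if still undecided:
    when x == y this loop would never terminate, hence the cutoff."""
    for n in range(1, max_bits + 1):
        gap = x(n) - y(n)
        bound = Fraction(2, 2 ** n)  # combined worst-case error of the two approximations
        if gap > bound:
            return 1
        if gap < -bound:
            return -1
    return None
```

Distinct reals eventually separate; equal ones keep the loop running forever, which is exactly the "after the heat death of the universe" problem.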

◧◩◪◨
92. chrisw+mvb[view] [source] [discussion] 2023-11-29 12:23:06
>>firebo+wN9
> posits are superior(imo, granted, they're more or less similar) to BLC, depending on how they're implemented, of course

I'm very confused that you say posits are "more or less similar" to Binary Lambda Calculus. Posits are an inert data encoding: to interpret a posit as a number, we plug its parts (sign, regime, exponent and fraction) into a simple formula to get a numerical value. Those parts can have varying size (e.g. for soft posits), but the range of results is fundamentally limited by its use of exponentials.

In contrast, BLC is a Turing-complete, general-purpose programming language. To interpret a BLC value as a number, we:

- Parse its binary representation into lambda abstractions (00), function calls (01) and de-Bruijn-indexed variables (encoded in unary)

- Apply the resulting term to the symbol `one-more-than` and the symbol `zero`

- Beta-reduce that expression until it reaches a normal form. There is no way, in general, to figure out if this step will finish or get stuck forever: even if it will eventually finish, that could take trillions of years or longer.

- Read the resulting symbols as a unary number, e.g. `one-more-than (one-more-than zero)` is the number two

Posit-style numbers can certainly be represented in BLC, by writing a lambda abstraction that implements the posit formula; but BLC can implement any other computable function, which includes many that grow much faster than the exponentials used in the posit formula (e.g. this has a nice explanation of many truly huge numbers https://www.reddit.com/r/math/comments/283298/how_to_compute... )

◧◩◪◨⬒⬓⬔⧯
96. tromp+dTf[view] [source] [discussion] 2023-11-30 16:23:44
>>TheNew+JX7
You were looking at the definition of a computable real number [1]. The issue there is whether you can approximate the number arbitrarily well. An issue that is nonexistent in integers, as you note in

> Therefore, for any integer N, it is computable.

> If I say: f(x) = floor(xN)

For many definitions of a real, it's not at all clear whether you can compute this f(x). The ability to compute f already amounts to being able to approximate x arbitrarily well. For example, for Chaitin's number, you cannot compute your f() except for a few small values of N.

[1] https://en.wikipedia.org/wiki/Computable_number
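To see why computing f(x) = floor(xN) amounts to approximating x arbitrarily well, here's a sketch, assuming x is given as a function x(n) returning a rational within 2^-n of the true value (my own interface for illustration):

```python
import math
from fractions import Fraction

def floor_xN(x, N, max_bits=64):
    """Compute floor(x*N) by refining x until the interval around x*N
    contains no integer boundary. Returns None at the precision cutoff,
    e.g. when x*N lands exactly on an integer, or when x (like Chaitin's
    number) admits no sufficiently good computable approximation."""
    for n in range(1, max_bits + 1):
        approx = x(n) * N
        err = Fraction(N, 2 ** n)  # scaling by N scales the error bound by N
        if math.floor(approx - err) == math.floor(approx + err):
            return math.floor(approx - err)
    return None
```

E.g. for x = 1/3 this settles floor(10x) = 3 after a few refinements, but floor(3x) never settles, since 3x sits exactly on the integer boundary 1.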
