zlacker

[parent] [thread] 17 comments
1. matja+(OP)[view] [source] 2023-11-27 20:54:05
IEEE754 64-bit representation already has infinity:

    uint64_t x = 0x7ff0000000000000ULL;
    double d;
    memcpy(&d, &x, sizeof d); /* type-pun without strict-aliasing UB */
    printf("%f\n", d);
output:

    inf
But you could use a representation where 0 is 0, and 1 is infinity, saving 63 bits...
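
A throwaway sketch of that 1-bit scheme, for anyone who wants to see it run (decode1 is a name invented here):

    #include <math.h>   /* INFINITY */
    #include <stdio.h>

    /* The joke 1-bit format: 0 decodes to zero, 1 decodes to infinity. */
    static double decode1(unsigned bit) { return bit ? INFINITY : 0.0; }

    int main(void) { printf("%f %f\n", decode1(0), decode1(1)); } /* 0.000000 inf */
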
replies(4): >>tromp+V >>toxik+65 >>summer+0b >>loxias+Zu
2. tromp+V[view] [source] 2023-11-27 20:58:12
>>matja+(OP)
I don't consider infinity to be a number though. Especially not in a largest number contest.
replies(1): >>lowq+eg
3. toxik+65[view] [source] 2023-11-27 21:17:04
>>matja+(OP)
Fair, but also an uninteresting answer.
replies(1): >>pphysc+Ua
4. pphysc+Ua[view] [source] [discussion] 2023-11-27 21:40:58
>>toxik+65
It's about as interesting as the other answers proposed in TFA, and it gets to the meat of it: they are all just representations invented by people, and there's nothing stopping us from inventing our own representations that fit into 64 bits (or 1 bit).
replies(2): >>toxik+Ug >>dumbo-+Gi
5. summer+0b[view] [source] 2023-11-27 21:41:09
>>matja+(OP)
That doesn't satisfy the explicitly stated condition "a largest (finite) representable value".
6. lowq+eg[view] [source] [discussion] 2023-11-27 22:09:20
>>tromp+V
Let 0 correspond to zero, and 1 correspond to Rayo's number. Crisis averted!
replies(2): >>danbru+Yh >>tromp+li
7. toxik+Ug[view] [source] [discussion] 2023-11-27 22:12:55
>>pphysc+Ua
No, it doesn’t. The question is “what is the largest non-trivial number you can represent with some constraint on size of its expression”. It’s a really old question, and saying “infinity” as an answer misses the point. Saying you can invent an arbitrary number system also misses the point by simply not answering. If you need to spend a bunch of bytes explaining your new system, did you really use 8 bytes?

It just feels really bad faith.

replies(1): >>pphysc+ni
8. danbru+Yh[view] [source] [discussion] 2023-11-27 22:18:17
>>lowq+eg
Let all values encode Rayo's number. 64 bits saved!
9. tromp+li[view] [source] [discussion] 2023-11-27 22:20:29
>>lowq+eg
I find Loader's number [1] more interesting, as it is actually computable, yet far, far larger than other famous computable numbers like Friedman's TREE(3) or SCG(3). I'm looking forward to one day programming it in the lambda calculus and seeing how much smaller than the existing ~500 bytes of C code it can be.

[1] https://www.youtube.com/watch?v=q6Etl4oGL4U&list=PL-R4p-BRL8...

10. pphysc+ni[view] [source] [discussion] 2023-11-27 22:20:40
>>toxik+Ug
What is "really bad faith" about saying "An ON bit indicates the value 'googolplex'?"

Computing is fundamentally about decoding bit strings as different arbitrary representations that are meaningful to humans.

replies(1): >>tromp+Tn
11. dumbo-+Gi[view] [source] [discussion] 2023-11-27 22:22:06
>>pphysc+Ua
Not really. The specialness of TFA's constructions is that the interpreter does not need to have any knowledge of the large numbers a priori.
replies(1): >>8note+Cp
12. tromp+Tn[view] [source] [discussion] 2023-11-27 22:47:19
>>pphysc+ni
Even the word "googolplex" is quite a bit longer than the lambda calculus program in question...
replies(1): >>8note+op
13. 8note+op[view] [source] [discussion] 2023-11-27 22:53:59
>>tromp+Tn
The actual bit is just 1 though, the word "googolplex" is in the accompanying documents for interpreting the bit.

The course on reading and using lambda calculus is similarly longer than the actual lambda calculus expression.

replies(1): >>chrisw+LD
14. 8note+Cp[view] [source] [discussion] 2023-11-27 22:55:36
>>dumbo-+Gi
Alternative constructions don't have to either - e.g. "The bit 1 represents running the algorithm from TFA"
replies(1): >>dumbo-+wq
15. dumbo-+wq[view] [source] [discussion] 2023-11-27 23:00:11
>>8note+Cp
That's precisely identical to declaring the number in the interpreter.
16. loxias+Zu[view] [source] 2023-11-27 23:27:20
>>matja+(OP)
> But you could use a representation where 0 is 0, and 1 is infinity, saving 63 bits...

Reminds me of the hilarious and brilliant: http://tom7.org/nand/

replies(1): >>1lette+fI
17. chrisw+LD[view] [source] [discussion] 2023-11-28 00:20:57
>>8note+op
> The course on reading and using lambda calculus is similarly longer than the actual lambda calculus expression

I'm not sure what a "course on reading and using" has to do with description complexity? In any case, it takes 206 bits to implement a binary lambda calculus interpreter (that's Theorem 1 in http://tromp.github.io/cl/LC.pdf )

18. 1lette+fI[view] [source] [discussion] 2023-11-28 00:54:28
>>loxias+Zu
Floating format 1:1:0:1's 8 possible values:

    000: +0
    001: +1 ( denormal: (-1)^0 * 0.5 * 2^(-0+1) )
    010: +inf
    011: +qnan
    100: -0
    101: -1 ( denormal: (-1)^1 * 0.5 * 2^(-0+1) )
    110: -inf
    111: -qnan
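
A minimal C sketch reproducing that table, assuming the field order sign | exponent | fraction (decode3 is an invented name):

    #include <math.h>   /* INFINITY, NAN */
    #include <stdio.h>

    /* Decode the 3-bit 1:1:0:1 format: sign | 1 exponent bit | 1 fraction bit.
       bias = 2^(1-1) - 1 = 0, so the denormal value is 0.f * 2^(-0+1). */
    static double decode3(unsigned bits) {
        int sign = (bits >> 2) & 1;
        int exp  = (bits >> 1) & 1;
        int frac = bits & 1;
        double mag;
        if (exp == 0)
            mag = frac * 0.5 * 2.0;  /* zero (frac=0) or denormal +-1 (frac=1) */
        else if (frac == 0)
            mag = INFINITY;          /* exponent all ones, fraction zero */
        else
            mag = NAN;               /* exponent all ones, fraction nonzero: qnan */
        return sign ? -mag : mag;
    }

    int main(void) {
        for (unsigned b = 0; b < 8; b++)
            printf("%u%u%u: %f\n", (b >> 2) & 1, (b >> 1) & 1, b & 1, decode3(b));
    }
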
=== Floating point crib sheet ===

--- Format ---

Sign:exponent:stored explicit mantissa leading bit:mantissa fraction:

       binary16 = 1:5:0:10
       bfloat16 = 1:8:0:7
    TensorFloat = 1:8:0:10
           fp24 = 1:7:0:16
       binary32 = 1:8:0:23 
       binary64 = 1:11:0:52
           8087 = 1:15:1:63
      binary128 = 1:15:0:112
--- Interpretation ---

leading bit = (exponent != 0) ? 1 : 0 when implicit (not stored)

bias = 2^(exponent bits - 1) - 1

value = (-1)^sign * 0 when zero

value = (-1)^sign * {{leading bit}}.{{mantissa fraction}}b * 2^(exponent - bias) when normal

value = (-1)^sign * 0.{{mantissa fraction}}b * 2^(-bias+1) when denormal
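
As a sketch, the same formulas transcribed into C for binary64, where bias = 2^(11-1) - 1 = 1023 (value64 is an invented name; inf/nan are not handled):

    #include <math.h>    /* ldexp */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>  /* memcpy */

    /* Recompute a finite binary64 value from sign/exponent/fraction,
       following the formulas above. */
    static double value64(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        int sign       = (int)(bits >> 63);
        int exp        = (int)((bits >> 52) & 0x7ff);
        uint64_t frac  = bits & 0xfffffffffffffULL;
        const int bias = 1023;  /* 2^(11-1) - 1 */
        double v;
        if (exp == 0)           /* zero or denormal: 0.fraction * 2^(-bias+1) */
            v = ldexp((double)frac, -52) * ldexp(1.0, -bias + 1);
        else                    /* normal: 1.fraction * 2^(exp-bias) */
            v = ldexp(1.0 + ldexp((double)frac, -52), exp - bias);
        return sign ? -v : v;
    }

    int main(void) {
        printf("%g %g\n", value64(3.14), value64(5e-324));  /* round-trips */
    }
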

--- Classification ---

zero = exponent == 0 && mantissa fraction == 0

denormal = exponent == 0 && mantissa fraction != 0

normal = exponent != 0 && exponent != ~0

inf = exponent == ~0 && mantissa fraction == 0

nan = exponent == ~0 && mantissa fraction != 0

snan = nan && msb(mantissa fraction) == 0

qnan = nan && msb(mantissa fraction) == 1
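
A quick C sketch of those predicates for binary64 (classify64 is an invented name; 0x7ff is the all-ones pattern of the 11-bit exponent field):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Classify a binary64 value with the predicates above:
       11-bit exponent field, 52-bit mantissa fraction. */
    static const char *classify64(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);            /* type-pun safely */
        uint64_t exp  = (bits >> 52) & 0x7ff;      /* exponent field */
        uint64_t frac = bits & 0xfffffffffffffULL; /* mantissa fraction */
        if (exp == 0)     return frac == 0 ? "zero" : "denormal";
        if (exp != 0x7ff) return "normal";
        if (frac == 0)    return "inf";
        return (frac >> 51) & 1 ? "qnan" : "snan"; /* msb of fraction */
    }

    int main(void) {
        double zero = 0.0;
        printf("%s %s %s %s\n", classify64(1.0), classify64(5e-324),
               classify64(1.0 / zero), classify64(zero / zero));
        /* prints: normal denormal inf qnan */
    }
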

PS: It often takes fewer gates to implement a simpler microcoded microarchitecture than a single hardwired macroarchitecture. Microcoded designs are theoretically slower than hardwired ones, but in practice this often isn't the case: the costs of gate fanout and the extra gates needed for clock distribution erode the gains of specifying and decoding everything directly in hardware.
