zlacker

The largest number representable in 64 bits

submitted by tromp+(OP) on 2023-04-23 15:37:08 | 68 points 42 comments
[view article] [source]

NOTE: showing posts with links only
5. pipo23+Tz[view] [source] 2023-04-23 18:56:52
>>tromp+(OP)
The intro got me thinking of MDL model selection. I.e. to express X you can choose a language L that can represent X, and rather than focusing on the conciseness of just L(X) (which for some powerful L might be a single bit) it's more fair to also take the length of the language itself into account.

Then this question would be rephrased as something along the lines of "what language would fit into 64 bits and leave enough bits to describe a huge value in that language? And which would represent the largest value?"

https://en.wikipedia.org/wiki/Minimum_description_length
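
As a toy two-part-code illustration of that point (the numbers below are made up, not from the article): the "single bit" encoding only looks cheap because the language that hides X isn't being charged for.

    # Hypothetical two-part MDL comparison: total cost = bits to describe
    # the language/model itself + bits to describe X within that language.
    candidates = {
        "raw 64-bit integer": (0, 64),   # no model, just the value
        "n -> 2^n":           (20, 6),   # small model, X given by its exponent
        "lookup table for X": (64, 1),   # "powerful L": X baked into the model
    }

    for name, (model_bits, data_bits) in candidates.items():
        print(f"{name:22s} total = {model_bits + data_bits} bits")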

◧◩
14. networ+RO[view] [source] [discussion] 2023-04-23 20:35:17
>>mcdonj+BB
There is an alternative front end for fandom.com called BreezeWiki. It is open source (written in Racket!), and as with other open-source alternative front ends, volunteers run their own instances. Here is the story link on one instance:

https://antifandom.com/googology/wiki/User_blog:JohnTromp/Th...

BreezeWiki source code: https://gitdab.com/cadence/breezewiki.

◧◩
19. nuclea+q51[view] [source] [discussion] 2023-04-23 22:29:37
>>mcdonj+BB
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

https://news.ycombinator.com/newsguidelines.html

◧◩
20. jsrcou+ga1[view] [source] [discussion] 2023-04-23 23:05:41
>>mcdonj+BB
Sounds like you'd benefit from adblock. I've used the Steven Black adblock host list [0] for some time now on all my PCs and it works extremely well.

[0] https://github.com/StevenBlack/hosts

◧◩
32. tromp+V42[view] [source] [discussion] 2023-04-24 08:49:16
>>Legion+2c1
Binary Lambda Calculus seems to spend its bits much more wisely than a Turing Machine encoding, working at a much higher level of abstraction. Defining and applying functions performs more useful work than moving a tape head around and jumping from state to state. The Turing Machine is also severely hampered by the pure sequential nature of the tape, where lambda calculus has access to all kinds of easily defined data structures.
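
For a rough sense of how compactly terms serialize, here is a toy sketch of the standard BLC bit encoding (the tuple representation of terms is ad hoc, chosen just for this illustration): 00 for an abstraction, 01 for an application, and a de Bruijn variable with index i written as i ones followed by a zero.

    # Toy encoder for the standard BLC bit encoding; the tuple term format
    # is made up for this example only.
    def encode(term):
        kind = term[0]
        if kind == "var":        # ("var", i): de Bruijn index i, 1-based
            return "1" * term[1] + "0"
        if kind == "lam":        # ("lam", body)
            return "00" + encode(term[1])
        if kind == "app":        # ("app", fun, arg)
            return "01" + encode(term[1]) + encode(term[2])
        raise ValueError(kind)

    identity = ("lam", ("var", 1))           # λx.x
    k_true   = ("lam", ("lam", ("var", 2)))  # λx.λy.x
    print(encode(identity))  # 0010     (4 bits)
    print(encode(k_true))    # 0000110  (7 bits)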

> One strategy I'm thinking of for sufficiently large n would be to create a compression scheme for TMs

The universal variant https://oeis.org/A361211 of the lambda busy beaver can easily simulate any binary encoded TM with fixed overhead.

◧◩◪
39. Legion+ik3[view] [source] [discussion] 2023-04-24 16:41:37
>>tromp+V42
> Binary Lambda Calculus seems to spend its bits much more wisely than a Turing Machine encoding, working at a much higher level of abstraction. Defining and applying functions performs more useful work than moving a tape head around and jumping from state to state.

That's what I mean by β-reduction being a more powerful operation: it can copy a term into arbitrary points in the program. (In the BB context, I like to think of BLC in terms of substitution rather than functions.) So I wonder if the comparison is somewhat biased, since applying a TM transition is logically simpler than applying a BLC β-reduction, which involves recursively parsing the current state, substituting, and reindexing.
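
Concretely, even a single β-reduction step on de Bruijn-indexed terms involves a fair amount of machinery. Here is a toy sketch with an ad-hoc tuple representation (not any particular BB simulator), just to make the "substituting and reindexing" explicit:

    # One β-reduction step on de Bruijn-indexed terms (1-based indices).
    # Terms: ("var", i), ("lam", body), ("app", fun, arg) -- illustration only.
    def shift(t, d, cutoff=1):
        """Add d to every free variable index >= cutoff."""
        k = t[0]
        if k == "var":
            return ("var", t[1] + d) if t[1] >= cutoff else t
        if k == "lam":
            return ("lam", shift(t[1], d, cutoff + 1))
        return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

    def subst(t, j, s):
        """Replace variable j in t with s, adjusting indices under binders."""
        k = t[0]
        if k == "var":
            return s if t[1] == j else t
        if k == "lam":
            return ("lam", subst(t[1], j + 1, shift(s, 1)))
        return ("app", subst(t[1], j, s), subst(t[2], j, s))

    def beta(redex):
        """Reduce (λ.body) arg: substitute arg for index 1, then shift down."""
        (_, (_, body), arg) = redex      # expects ("app", ("lam", body), arg)
        return shift(subst(body, 1, shift(arg, 1)), -1)

    # (λx.x) y  ->  y
    print(beta(("app", ("lam", ("var", 1)), ("var", 7))))   # ('var', 7)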

> The Turing Machine is also severely hampered by the pure sequential nature of the tape, where lambda calculus has access to all kinds of easily defined data structures.

I'd say TMs have plenty of data structures, but most of the useful ones are weird and alien since the control flow has to be encoded on the tape alongside the data, vs. BLC which can precisely direct where a term should be substituted. The real handicap IMO is the locality of the tape: a TM can't move a block of data across another block of data on the tape without wasting valuable states.

> The universal variant https://oeis.org/A361211 of the lambda busy beaver can easily simulate any binary encoded TM with fixed overhead.

Of course; the trick is to go from n + c bits to n bits.

◧◩◪
42. dang+mE6[view] [source] [discussion] 2023-04-25 16:27:53
>>Chancy+Pi2
"Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that.""

https://news.ycombinator.com/newsguidelines.html
