(And before anyone says it's not a number, call `isnan` with an infinity and get back to me :)
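A minimal check, for the skeptical (standard C++, nothing assumed beyond the standard library):

```cpp
// Infinity passes the "is it a number" test: isnan is false for it.
#include <cassert>
#include <cmath>
#include <limits>

int main() {
    double inf = std::numeric_limits<double>::infinity();
    assert(!std::isnan(inf)); // not NaN...
    assert(std::isinf(inf));  // ...just infinite
}
```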
Then this question would be rephrased as something along the lines of "what language would fit into 64 bits and leave enough bits to describe a huge value in that language? And which would represent the largest value?"
If you don't want to host your own blog, consider putting it on GitHub.
https://antifandom.com/googology/wiki/User_blog:JohnTromp/Th...
F^^^^^^F = 15 ^^^^^^ 15 in Knuth up-arrow notation: eight ASCII characters at 8 bits each [8*8 = 64 bits].
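Spelling out the notation being invoked here, a definitional C++ sketch of Knuth's up-arrow recursion. It's purely illustrative; anything like 15 ^^^^^^ 15 blows past 64 bits (and any physical computer) immediately.

```cpp
// m ↑^k n in Knuth's up-arrow notation; k = 1 is ordinary exponentiation.
#include <cstdint>

uint64_t arrow(uint64_t m, uint64_t k, uint64_t n) {
    if (n == 0) return 1;                       // m ↑^k 0 = 1
    if (k == 1) return m * arrow(m, 1, n - 1);  // m ↑ n = m * (m ↑ (n-1))
    return arrow(m, k - 1, arrow(m, k, n - 1)); // m ↑^k n = m ↑^(k-1) (m ↑^k (n-1))
}
```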
According to Conway and Guy (1996), The Book of Numbers, p. 60, the arrow notation, defined by Knuth (1976), is such that m^n is m x m x ... x m,
m^^n is m^m^...^m,
m^^^n is m^^m^^...^^m,
and so on, with n copies of m in each case, and the expression being evaluated from the right.

(Also, I've been thinking of how that BB_λ conjecture might be proven. One strategy I'm thinking of, for sufficiently large n, would be to create a compression scheme for TMs that omits duplicate machines and trivial machines that cannot contribute to BB_TM, to get past the naive 2n(2+log₂(n+1))-bit bound. Then we create a BLC program with a decompressor and a TM interpreter, to which a compressed TM can be appended. But the compression would have to be good enough to get around the overhead of bit sequences not being natively expressible in BLC.)
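For scale, here's the naive bound from that parenthetical, evaluated as written (log₂(n+1) taken fractionally; a real encoding would need the ceiling):

```cpp
// Naive cost of describing an n-state, 2-symbol TM: 2n transitions,
// each with a write bit, a move bit, and a next state among n+1 choices.
#include <cmath>
#include <cstdio>

double naive_tm_bits(double n) { return 2 * n * (2 + std::log2(n + 1)); }

int main() {
    const int ns[] = {1, 5, 10, 100};
    for (int n : ns)
        std::printf("n = %3d -> %6.1f bits\n", n, naive_tm_bits(n));
}
```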
Now if we count the number of distinct states 64 bits can take, there are 2^64 of them, so the largest value is still 18446744073709551615 (2^64 - 1), which I consider the only possible answer.
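A quick check of both figures (assuming the unsigned interpretation):

```cpp
// 64 bits have 2^64 distinct states; the largest unsigned value is one less.
#include <cstdint>
#include <cstdio>

int main() {
    std::printf("%llu\n", (unsigned long long)UINT64_MAX);
    // prints 18446744073709551615 == 2^64 - 1; the state count itself,
    // 2^64, is one more and no longer fits in 64 bits.
}
```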
(Bearing in mind that the set of operations that you can accurately support in your number system also depends on the set of representable values.)
If you don't really care, you can go arbitrarily high.
If we could find some set of staggeringly large values, perhaps even infinite, with some useful set of operations that can be performed on them (mapping back into the set), then we could come up with ridiculous answers that aren't necessarily useless.
I think a more interesting question is "what's the largest number usefully representable in 64 bits?".
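If "usefully" allows the IEEE-754 double reading from elsewhere in the thread, a small illustration of what you buy (range) and where it breaks (overflow to the infinity mentioned above):

```cpp
// Largest finite IEEE-754 binary64 value; overflow lands on a
// representable value (infinity), so arithmetic stays well-defined.
#include <cstdio>
#include <limits>

int main() {
    double m = std::numeric_limits<double>::max();
    std::printf("%.17g\n", m);     // ~1.7976931348623157e+308
    std::printf("%g\n", m * 2.0);  // inf: overflow saturates per IEEE-754
}
```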
> lack of cheating toward wanted results
> One strategy I'm thinking of for sufficiently large n would be to create a compression scheme for TMs
The universal variant https://oeis.org/A361211 of the lambda busy beaver can easily simulate any binary encoded TM with fixed overhead.
std::numeric_limits<double>::infinity()

That's what I mean by β-reduction being a more powerful operation: it can copy a term into arbitrary points in the program. (In the BB context, I like to think of BLC in terms of substitution rather than functions.) So I wonder if the comparison is somewhat biased, since applying a TM transition is logically simpler than applying a BLC β-reduction, which involves recursively parsing the current state, substituting, and reindexing.
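To make "parsing, substituting, and reindexing" concrete, here's a minimal C++ sketch of one β-step under de Bruijn indices, following the standard shift/substitute definitions. The Term representation is hypothetical (a plain tree, not BLC's bit encoding):

```cpp
#include <memory>

struct Term;
using TermP = std::shared_ptr<Term>;

struct Term {
    enum Kind { Var, Lam, App } kind;
    int index;   // Var: de Bruijn index
    TermP a, b;  // Lam: body in a; App: function a, argument b
};

TermP var(int i)            { return std::make_shared<Term>(Term{Term::Var, i, nullptr, nullptr}); }
TermP lam(TermP body)       { return std::make_shared<Term>(Term{Term::Lam, 0, body, nullptr}); }
TermP app(TermP f, TermP x) { return std::make_shared<Term>(Term{Term::App, 0, f, x}); }

// Reindex: add d to every variable that is free under `cutoff` binders.
TermP shift(const TermP& t, int d, int cutoff = 0) {
    switch (t->kind) {
    case Term::Var: return var(t->index >= cutoff ? t->index + d : t->index);
    case Term::Lam: return lam(shift(t->a, d, cutoff + 1));
    default:        return app(shift(t->a, d, cutoff), shift(t->b, d, cutoff));
    }
}

// Substitute s for variable j in t, adjusting s as binders are crossed.
TermP subst(const TermP& t, int j, const TermP& s) {
    switch (t->kind) {
    case Term::Var: return t->index == j ? s : t;
    case Term::Lam: return lam(subst(t->a, j + 1, shift(s, 1)));
    default:        return app(subst(t->a, j, s), subst(t->b, j, s));
    }
}

// One β step: (λ. body) arg  →  shift(subst(body, 0, shift(arg, 1)), -1).
TermP beta(const TermP& body, const TermP& arg) {
    return shift(subst(body, 0, shift(arg, 1)), -1);
}
```

Even this toy version shows why a step is global rather than local: subst walks the whole body, and shift re-walks whatever it copies in.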
> The Turing Machine is also severely hampered by the pure sequential nature of the tape, where lambda calculus has access to all kinds of easily defined data structures.
I'd say TMs have plenty of data structures, but most of the useful ones are weird and alien, since the control flow has to be encoded on the tape alongside the data, vs. BLC, which can precisely direct where a term should be substituted. The real handicap IMO is the locality of the tape: a TM can't move a block of data across another block of data on the tape without wasting valuable states.
> The universal variant https://oeis.org/A361211 of the lambda busy beaver can easily simulate any binary encoded TM with fixed overhead.
Of course; the trick is to go from n + c bits to n bits.