I wonder if anyone has followed through with this?
What does this construction imply about BB(n)? In what sense is the BB sequence even well-defined if we can prove that it can't be determined?
(Edited for clarity.)
And I thought I could win this silly game with my Knuth's Up Arrow notation and vague knowledge of Ackermann's sequence!
What is n in this construction?
We must assume this machine would loop forever, if it operated only on the rules of ZFC, because we assume ZFC is consistent. A real machine cannot loop forever, and it would be unreasonable to assume that it does. Real machines are not known to operate based solely on the rules of ZFC. ZFC is only an approximation.
Does this mean that the modern mathematician must be able to assemble all the digits of the number's representation (say, in decimal)? Given some formalization of this requirement, there would be a firmly set upper bound on the acceptable integers.
BB(BB(BB(BB(BB(11111)))))
Repeat as many BB's as you have space on your 1000 character card. You might even use math notation to define a new function where BB-(n) = [BB(BB(.......]
<---- n ----->
And then you might have: BB-(BB-(BB-(1111)))(BB-(BB-(BB-(1111)))(11111))
Or some such monstrosity. But you could go further than that: A(A(10)) is much bigger than A(1000), so you can get larger numbers faster by iterated applications of the Ackermann function. Turn the iteration into a function, and let B(n) be n applications of A to n. Iterated application of B would be even faster, so turn that iteration into a function: let C(n) be n applications of B to n.
But this process itself is iterative. So, define a new function: let iterA(n) be the nth level of iterated Ackermann functions applied to n, where iterA(1) is A(1), iterA(2) is B(2), iterA(3) is C(3), and so on.
Now, iterA can be applied iteratively. So, repeat again. The resulting function can be applied iteratively, so repeat again.
And this whole process of turning iteration into a function and then iterating that function is itself an iterative process, so it can be turned into a function...
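A minimal Python sketch of this iteration-into-a-function ladder (the names nest/iterate/iterA are mine, and a toy successor function stands in for Ackermann, since the real values are unwritable):

    def nest(f, k, n):
        # apply f to n, k times
        for _ in range(k):
            n = f(n)
        return n

    def iterate(f):
        # turn "repeatedly apply f" into a new one-argument function:
        # iterate(A) is the B above, iterate(B) is C, and so on
        return lambda n: nest(f, n, n)

    def iterA(n, f):
        # climb n - 1 rungs of the ladder, then apply:
        # iterA(1) = A(1), iterA(2) = B(2), iterA(3) = C(3), ...
        for _ in range(n - 1):
            f = iterate(f)
        return f(n)

    succ = lambda n: n + 1        # toy stand-in for the Ackermann function
    print(iterA(3, succ))         # 24; with the real A this is unwritable

Even starting from the successor function, each rung of the ladder jumps from linear to exponential to super-exponential growth; starting from Ackermann, the values are hopeless almost immediately.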
Let A = {fastest published sequence}. # This is a macro for BB, Ackermann's, etc.
Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^10(A, 10)
Of course, given the nature of this problem, you can define your converging algorithm to terminate when it is within a certain error of what the "pure" answer would be. Of course, it is possible you would underestimate the number, and someone could beat you by using the same method with a slightly different termination condition, so you may want to add 1...
Then make the Greek letters the entire alphabet iteration process (meaning BB-n through Z(n)). And then make the Hebrew letters the entire Greek letter process.
Look at how much fun we're having!
The BB-2 used in an iterative algorithm is talking about "A busy beaver number of (A busy beaver number of 10)", no need for super machines here. The iterative algorithm is short enough you can define it in your 1000 characters, no need for additional publication.
I think it may help you to understand how the BB sequence can be well-defined if you change your definition of "we can prove".
A proof is a sequence of logical deductions in an axiomatic system, so what "we can prove" depends a lot on which axiom system you're using. For some types of questions, it helps to make that explicit.
For instance, in weak axiom systems, we can't prove that every hydra game terminates. [0] In very strong axiom systems, ones so strong we call them inconsistent, we can prove everything! People argue over intermediate systems sometimes by pointing out things they can prove that are counter-intuitive. [1][2]
For any terminating Turing machine, there is a (possibly very long) proof in a very weak axiom system of that fact: the full trace of the execution. (For non-terminating machines, this is not true.) So, if ZFC, a rather strong axiom system, cannot prove a machine halts, it does not halt.
For another example of this kind of thing, see Terry Tao's discussion of the consequences of the independence of Goldbach's conjecture from ZFC on MathOverflow. [3]
[0] https://en.wikipedia.org/wiki/Goodstein%27s_theorem
[1] https://en.wikipedia.org/wiki/Freiling%27s_axiom_of_symmetry
[2] https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
[3] http://mathoverflow.net/a/27764
...if the author is here, he should correct this.
Every BB number can be determined. However, each one must require zillions of special cases, each needing its own reasoning---this is the only way to prevent the BB numbers from being computable.
So, if you consider, for example, the proof of the four-color theorem (which amounts to checking many special cases), as mathematics, then mathematics surely can determine BB(n). It's just extremely hard---as hard as doing all of mathematics (in the sense that if you could easily get BB(n) you could then take a logical proposition you are interested in proving in some axiomatic system and construct a TM to derive the statement using those axioms. Then, count how many states that TM has, and start running it. If it runs for more than BB(# states), there is no proof from your system of your proposition).
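In code form, that reduction looks roughly like this (a thought experiment, not an implementation: bb is the uncomputable oracle, and build_prover_tm / run_for are hypothetical helpers for constructing and simulating the proof-enumerating machine):

    def is_provable(proposition, axioms, bb, build_prover_tm, run_for):
        # build_prover_tm: hypothetical machine that halts iff some
        # derivation of `proposition` from `axioms` exists
        tm = build_prover_tm(proposition, axioms)
        # BB(n) bounds the runtime of every halting n-state machine, so
        # silence past this budget means no proof exists, period
        return run_for(tm, steps=bb(tm.num_states))   # True iff it halted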
Like you run all n-state Turing Machines for a super long time, and take the max of the output of all the ones that have halted. Now this might give you BB(n), but you can't be sure because you don't know whether the ones that haven't halted are ever going to halt.
In mathematics, we can just say "Well consider the set of all the ones that do halt, and take their max." In real life, it's not so easy to "consider the set".
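That estimate-from-below really is programmable, though. Here's a toy Python enumerator, under my own simplified halting convention (the halt transition reads but doesn't write); for n = 2 it still recovers the known value Σ(2) = 4:

    import itertools

    def simulate(table, budget):
        # run a 2-symbol machine; return #ones on tape if it halts in time
        tape, pos, state = {}, 0, 0
        for _ in range(budget):
            action = table[(state, tape.get(pos, 0))]
            if action is None:              # halting transition
                return sum(tape.values())
            tape[pos], move, state = action
            pos += move
        return None                         # still running -- who knows?

    def bb_lower_bound(n, budget):
        keys = [(s, b) for s in range(n) for b in (0, 1)]
        acts = [None] + [(w, m, s) for w in (0, 1) for m in (-1, 1)
                         for s in range(n)]
        best = 0
        for choice in itertools.product(acts, repeat=len(keys)):
            score = simulate(dict(zip(keys, choice)), budget)
            if score is not None:
                best = max(best, score)
        return best

    print(bb_lower_bound(2, 100))   # 4 -- but no budget can certify equality

No finite budget ever promotes the lower bound to equality: deciding that the still-running machines never halt is exactly the halting problem.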
I wrote a comment down below about how one could in principle determine a number which is probably BB(n), but you could never be sure. But I just had the crazy thought that if a human brain is really just an N-state Turing machine for some giant N, then any human would either wait forever or give up before finding the true BB(n) for some n. Time for bed!
One of the most important sentences in understanding them is one that's easy to pass by in the original work: "Indeed, already the top five and six-rule contenders elude us: we can’t explain how they ‘work’ in human terms."
That is, while we humans are thinking we're all clever by defining repeated iterations of the BB ruleset itself, all of these things that we think we are being so clever with are actually very very compactly described by Turing Machine states. Meanwhile, even by BB(6) the TM is metaphorically "using" incomprehensible tricks to be larger than we'd even dream. If repeated iterations of BB'ness can be trivially described in, say, 10 states, and let's say we reserve 4 states for BB(4) to describe the number of times to iterate, our hand-constructed "clever" re-iteration of the BB procedure will be blown away by some other 10-state machine that we can't even imagine would ever terminate.
So on the one hand, yes, BB(BB(20)) is incomprehensibly larger than merely BB(20), and yet, on the other hand, in a profound way, it also doesn't matter. Busy Beaver in a sense shows us numbers that have all the practical properties of infinity, even if they are not technically infinite. In a sense BB(30) < BB(31) is obviously true, yet, ultimately, a humanly meaningless statement, since grasping either number in any sense beyond its mere definition is impossible. We might as well say that "blibble" is less than "bloo". And not merely impossible in the sense that it is impossible to even really properly grasp just how far it is to Alpha Centauri, but impossible in the sense that we are literally incapable of even constructing mathematical tools that will let us grab on to and manipulate such numbers... deeply, profoundly, computationally impossible for us to understand, not merely "mind-blowing", but impossible for us to understand in the absolute strongest sense of the term.
Similarly for trying to catch up to BB with yet more iterations of exponentiation... the procedure we humans use is shockingly, shockingly simple to describe programmatically, which means that BB automatically already encompasses practically all such tricks very early in its sequence, which means you can't defeat it that way.
Busy Beaver is metaphorically (again) much smarter than you, and it's not worth your time to try to outsmart it to make yet again bigger numbers.
This also, probably correctly, implies that attempting to adjudicate some contest in which some people write BB(BB(BB(x))) vs some other BB-based notation is also impossible, and you'd actually fail out of the contest for writing an ill-defined number: if we can't compare the two numbers to see which is larger, even in principle, then for the purposes of this contest each is ill-defined. Busy Beaver in a sense also sets the limit of what a well-defined number even can be, and is thus in some sense the natural limit of this contest, by virtue of simply transcending through sheer incomprehensible size all our computationally-naive human tricks for making big numbers, or even just numbers of any size at all.
It's really a profound sequence.
I would think instead ZFC can't prove or disprove that this machine halts, which just means it's undecidable whether it halts or not.
This also implies that for sufficiently large n, we won't be able to prove that BB(n) = N, or even BB(n) < N, for any N, using the ZFC axioms. Of course, although the axioms can't compute or bound its value, they can still easily prove that BB(n) is finite.
He also relates these issues to Christian philosophy, which I find very interesting. In particular, he claims that the a priori belief in the objects defined by Peano Arithmetic is equivalent to worshipping numbers, as the Pythagoreans did.
I think this is the best starting point if you're interested in reading about his ideas: https://web.math.princeton.edu/~nelson/papers/warn.pdf
I'm not sure what this question means. Are you referencing an earlier discussion of a machine that demonstrated a proof of "0=1" using the axioms of ZFC? If so, what is your concern?
When thinking about undecidability/independence from a computational perspective, it's helpful to be cautious about the phrase "does not halt". How would you know a machine "does not halt"? You might run it for a long time and observe that it has not halted yet, but that's not the same.
Instead, you can talk about "a proof in $AXIOM_SYSTEM that this machine does not halt".
So, using that terminology, consider a machine that enumerates all valid proofs of ZFC and checks if their conclusion is "0 = 1". Certainly, if ZFC is inconsistent, this machine will find a proof and halt. However, if there is no such proof, then ZFC is consistent, the machine will not halt, but there will be no proof in ZFC that the machine will not halt.
This ability to enumerate the proofs is related to semidecidability/recursive-enumerability.
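A Python sketch of that machine, with the two ZFC-specific checkers left as named assumptions (checking any single candidate proof is decidable; what ZFC can't establish about itself is that the loop never exits):

    from itertools import count, product

    def all_strings(alphabet):
        # every finite string over the alphabet, in length order
        for n in count(1):
            for chars in product(alphabet, repeat=n):
                yield "".join(chars)

    def inconsistency_hunter(is_valid_zfc_proof, conclusion_of, alphabet):
        for candidate in all_strings(alphabet):
            if is_valid_zfc_proof(candidate) and conclusion_of(candidate) == "0 = 1":
                return candidate    # found one: ZFC is inconsistent
        # unreachable if ZFC is consistent -- the loop just never exits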
When you say "for sufficiently large n, we won't be able to prove that BB(n) = N", I agree. There is some n that is so large that there is a Turing machine of size n that does loop, but the fact that it loops will not be provable in ZFC.
http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_the...
They can't be computed - they can only, at best, be proven to be within some bounds.
But it's unimportant since it's not really required in order to do what you said.
(large pdf) https://web.math.princeton.edu/~nelson/books/pa.pdf
the busy beaver function grows so fast that for some number N, BB(N) is going to be larger than any number we can describe in normal math notation
so if you define Graham's number as a function of N like g(N) where Graham's number is g(64), busy beaver still grows faster and for some N it will be bigger
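For the curious, that g is short to write down and utterly impossible to evaluate (here arrow is Knuth's up-arrow; even g(1) = 3↑↑↑↑3 would never finish computing):

    def arrow(a, n, b):
        # Knuth's a ↑^n b
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return arrow(a, n - 1, arrow(a, n, b - 1))

    def g(k):
        # g(1) = 3 ↑↑↑↑ 3, g(k) = 3 ↑^g(k-1) 3; Graham's number is g(64)
        return arrow(3, 4, 3) if k == 1 else arrow(3, g(k - 1), 3)

    print(arrow(3, 2, 3))   # 3↑↑3 = 7625597484987; much past this is hopeless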
Actually, the speed would be limited by your linguistic or programming ability to concisely state the level of recursion you want (ideally the logical maximum) given all the previous inputs up until the recursion statement. E.g. "now recurse steps a, b, and c and call that d" is not as strong as "imagine how the slope of the exponential curve increased between a and b, and the difference between that and b and c, and raise c to the power of both of those increases combined raised to the power of a, b, and c respectively" -- but obviously the second sentence is longer and more complex as well as "bigger".
http://m-phi.blogspot.com/2011/10/nelson-withdraws-his-claim...
The whole episode:
- Reputed Princeton mathematician publishes a paper questioning the foundations of math
- Terry Tao 'disproves' him in a series of blog comments
is one of the more interesting recent happenings in the world of math.
I was referring to earlier work by Nelson in which he explains why he thinks Peano Arithmetic may be inconsistent.
Later, in the incident you referred to, he claimed to have an actual proof. Terry Tao showed the proof was wrong, and Nelson accepted this.
Usually this is left out, because Peano Arithmetic is treated as an MVP for mathematics. But Nelson claims Peano Arithmetic may be inconsistent, and proposes a weaker system.
And also, the Big Number Duel. [2]
[1] http://forums.xkcd.com/viewtopic.php?t=7469 [2] http://tech.mit.edu/V126/N64/64largenumber.html
1) Not unique to exponentiation, it's also true of 281474976710656 + 79766443076872509863361.
2) This is a vague notion and you're going to have to deal with the problem of the smallest number that isn't expressible (I believe it might be the same as "feasible numbers", and there's a whole theory here)
3) the whole tenor of his critique suggests something a bit more foundational (predicativity, I guess...?)
As you say, the notion of a "real number" is vague without the details (which I'm not really able to describe properly, because I'm only summarizing something I vaguely understand. I'm not an expert on mathematical logic). The best source is Nelson's book "Predicative Arithmetic", https://web.math.princeton.edu/~nelson/books/pa.pdf In the book, he defines what he means by a real number, and shows why exponentiation doesn't satisfy the property that if a^X is a real number, then a^(X+1) is.
1. can prove the machine halts for any machine that halts, but
2. cannot necessarily prove that non-halting machines don't halt
This is the distinction between recursively enumerable sets and decidable sets, and it comes up everywhere. The first implies that "for every true p, you'll eventually find a proof" and the second says "for every p, you'll eventually either find a proof of p or a disproof."
That's a very fair criticism.
My writing is muddy, for sure. Let me try again. You ask:
> But if the machine that stops when it has proven ZFC is inconsistent does not halt, then surely it means there is no proof of the inconsistency of ZFC? Hence ZFC is consistent? Which is contradicted by Godel.
I do not think this is contradicted by Goedel. Goedel's theorem implies that ZFC can't prove its own consistency unless it is inconsistent. So, how can this machine that searches for an inconsistency in ZFC (by enumerating all possible valid ZFC proofs so that for any proof p, there is some time t such that the machine checks p by time t) run forever, thereby proving ZFC consistent?
It is by the unprovability in ZFC (unless ZFC is inconsistent) of the fact that this machine will run forever.
The fact that this machine will run forever implies the consistency of ZFC, and the consistency of ZFC implies the machine will run forever.
You can see the results here: http://djm.cc/bignum-results.txt -- there's some interesting stuff in there!
u[n_][a_][b_]:=If[n==0,a b,Nest[u[n-1]@a,1,b]];Nest[u[#^#^#][#]@#&,9,u[99][9]@9]
u[n][a][b] gives a (Knuth's up arrow)^n b.
The after-the-semicolon expression computes f(f(f(f(... u[99][9][9] fs total ... f(9) ... ))))
with the function f(n)=u[n^n^n][n][n].
This clearly results in a finite number, since it is just iterated iterated iterated iterated ... (finitely many "iterated"s) ... iterated exponentiation of finite numbers. However, even when I try to compute (after $RecursionLimit=Infinity)
Nest[u[#^#^#][#]@#&,2,u[2][2]@2]
my kernel crashes. This number is BIG. There is one obvious way to make this number even bigger: make the base case yield a^b. However, then it's not Knuth's up arrow notation, so it's harder to debug by looking at the wikipedia page :). I used all my tricks (like using @) to get rid of extraneous characters, which gave me space to put #^#^# as the first argument of u. I still had 1 character remaining, so a 9 became 99. If you can squeeze out a few more characters, u[#][#]@# and 9 should be substituted for #^#^# and 99.
Unless you can always program A(x) with fewer than x states on a Turing machine?
I think this is really the key to your point. Part of me wonders whether this should even make the example in the essay, BB(111), invalid. If there is no way in our universe to ever know exactly what BB(111) is, can it really be considered a well-defined number? That said, I'm no mathematician, so I'm probably off-base here.
My first thought reading the challenge was a series of 7^7^7^7... since the 7s would nest together nicely (unlike 9s) :P
The explanation was that: "Because if a Turing machine could compute a sequence that grows faster than Busy Beaver, then it could use that sequence to obtain the D‘s—the beaver dams." But these goals seem different to me.
Another sequence you cannot compute is "foo(x)" - the number of integers lower than x for which algorithm "bar" doesn't halt. It's impossible to compute when "bar" doesn't always halt. But the sequence cannot grow faster than "x" itself.
Also, it doesn't matter that you cannot compute them if every TM of a given size can be analysed and proven halting. It doesn't have to be computed (as in, automatically checked).
The fact you can compute A(x) would only be an issue if you could compute A(x) with a machine of size smaller than x.
u[n_][a_,b_]:=If[n==0,a b,Nest[a~u[n-1]~#&,1,b]];Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
I should also note that I'm not confident as to which of
Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
Nest[#~u@#~#&,9,9~u[9^9^9^9]~9]
is larger.

Your example of foo/bar(x) is not quite correct. If you fix a program bar, then it is not necessarily impossible to characterize (and hence compute) the set of strings for which bar(x) halts. This is not the statement of the Halting problem, and as far as I know there is no theorem that says such a function "bar" exists.
From this it follows (by proof by contradiction!) that BB(n) is undecidable/uncomputable (getting a bit hazy on these definitions). In any case, we cannot prove that BB(n) = K for any K.
Is it a problem that for all K we cannot prove that BB(n) = K? I don't think so, this is just another incompleteness result.
Also see this old HN discussion where Eliezer proposed a good way to beat the winner: https://news.ycombinator.com/item?id=1542770
(9^9)^9 =~ 2e77
(9!)! =~ 1.6e1859933

Conversely, if you want to insist on a "constructible" number then you kind of have to insist on unary notation. Maybe a computer can give you a numerical value for "11^11^11^11", but can you really consider it well-defined if you can't put that many pebbles in a row? There are a few mathematicians who take this kind of view - see http://en.wikipedia.org/wiki/Ultrafinitism - but it's very much an extreme position.
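Those two magnitudes are easy to sanity-check in Python (using log-gamma for the big factorial, since (9!)! itself has almost two million digits):

    import math

    print(81 * math.log10(9))                   # (9^9)^9: ~77.3, i.e. ~2e77
    f9 = math.factorial(9)                      # 9! = 362880
    print(math.lgamma(f9 + 1) / math.log(10))   # log10((9!)!) ~ 1859933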
Furthermore, factorials don't really scale or stack easily. What the author is getting at in the relevant passage is stacking the same concept:
1. Multiplying is just adding the same number several times.
2. Exponentiation is just multiplying the same number several times.
3. Tetration is just exponentiating the same number several times.
4. Etc.
This gives us the infinite hierarchy expressed by the Ackermann numbers (basically A(i) = f_i(i,i)), a hierarchy that doesn't arise as naturally with factorials in place of exponentiation.
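That stacking recipe translates directly into code; a quick sketch of the ladder and its diagonal (the names are mine):

    def hyper(i, a, b):
        # level 1 is multiplication-as-repeated-addition, level 2 is
        # exponentiation, level 3 is tetration, and so on up the ladder
        if i == 0:
            return a + b
        if i == 1:
            return a * b
        if b == 0:
            return 1
        return hyper(i - 1, a, hyper(i, a, b - 1))

    def ackermann_number(i):
        # the diagonal A(i) = f_i(i, i) from the comment above
        return hyper(i, i, i)

    print(hyper(3, 2, 4))         # tetration: 2^(2^(2^2)) = 65536
    print(ackermann_number(3))    # 3^3^3 = 7625597484987; A(4) is unwritable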
Most of us take some axioms and believe them, ultimately because they correspond to physical experience - if you take a pile of n pebbles and add another pebble to the pile, you get a pile of n+1 pebbles, and n+1 is always a new number that doesn't =0 (that is, our pile looks different from a pile of 0 pebbles). Maybe there's some (very large) special n where this wouldn't happen - where you'd add 1 and get a pile of 0 pebbles, or where you can have n pebbles but you can't have n sheep. But so far we haven't found that. So far the universe remains simple, and the axioms of arithmetic are simpler than conceivable alternatives.
The problem arises when you say that "a programmer should be able to grind out" whether or not a TM halts, which you use to get around the fact that a TM cannot solve the halting problem. However, I'd question whether this is a trivial exercise: while we certainly are capable of recognizing infinite loops in the code we write, I'm not certain that humans can identify arbitrary infinite loops. Whether or not we can isn't a trivially answerable question: it comes down to whether our brain's neural networks can be modeled by a sufficiently large TM, and, if they cannot, what differences between our brains and a TM exist and why those would effectively allow us to solve the halting problem.
So I'd question whether finding the BBs is "'just' a matter of computation", because I'm not convinced that humans can solve the TM halting problem.
(Well, strictly we can prove that 2+2=4 in Presburger arithmetic which is provably complete and consistent. But any result that we prove by contradiction and that uses the higher axioms (e.g. Hilbert's basis theorem) is implicitly assuming the (unprovable) consistency of ZFC).
Mostly this doesn't bother working mathematicians - ZFC seems reasonable, corresponds to the models that we do have, so we just take it on trust. People who care explicitly about these issues might work in a "higher" axiom system to prove "metamathematical" facts (e.g. ZFC + weakly inaccessible large cardinal, under which we can prove that ZFC is consistent and also that this particular Turing machine doesn't halt).
((Of course the whole point of Gödel's incompleteness theorems is that there's no sharp line between mathematical and metamathematical; any unproven proposition might turn out to be an unprovable landmine. But this doesn't tend to bother people. After all, the proposition might just as easily turn out to simply be very difficult to prove, and the practical impact would be much the same))
He gives great explanations of things like the Conway chained arrow notation and the fast-growing hierarchy and just keeps on making bigger and bigger numbers.
Compose a state space that expands as quickly as possible in terms of the argument(s) and define some property of the state space as the result.
Looking at it this way, I'd say:
"Hyper-exponentiation" (Ackermann, tetration and so on) is basically counting the number of states in a well-defined state space.
BB(X) is equivalent to generating automatically the state space (via a search over all possible state spaces that can be constructed with a TM with X states) and returning the size of the maximum.
This means that BB(X) is smaller than the sum of the sizes of all possible state spaces constructible via that TM.
But BB(X+1) is aware of that trick (summing state space sizes) and uses it already (or more probably something better), so it doesn't really matter and you might as well just increase X by one to include all the tricks you had in mind for BB(X).
Any thoughts on that?
Imagine 2^(7n) copies of the judge of this competition. For every judge create a different bit sequence of length 7n. Decode it from ASCII and ask if it is a valid entry of the competition. Take the largest valid entry of those imaginary competitions. That is my entry. In order for this procedure to be consistent, this entry must be longer than n characters. Hence it will end with some padding you may disregard: asdfasdfasdfasdfasdfasdfasdfasdfasdfasdf.
I've been taking a somewhat metaphysical approach to thinking about it. Instead of thinking about a Turing Machine as just a head on an endless tape, can we not consider the Universe to represent the upper limits for our calculations? Just as Turing suggested that we can take the two dimensional representation of a mathematical equation and represent it in one dimension, can we represent the entire state of the Universe in a similar way on a hypothetical Turing Machine? The halting problem is then a question of whether or not the Universe has a halting state.
To extend from that, any machine that can be conceived must fit within the bounds of the Universe. Of course BB(Universe) is incalculable, but I think that it defines an upper limit. It is pointless to consider a Turing Machine that features an endless tape if an endless tape is an impossibility.
I'm not sure if this adds to the discussion, but I haven't had anyone else I could discuss this with until now.
And of course my method can be improved with iterations, which can in turn be resolved by creating a new notation. It's really never ending.
A[A[A(n)](n)](n)
And it's all computable, so the Busy Beaver grows faster than any* of these.
More generally, I expect there to not be any "meta-algorithm" for naming big numbers. Every big step will require a new insight. A similar problem is inventing notations for large ordinals: http://en.wikipedia.org/wiki/Ordinal_notation
You also have to remember that some quantities are going to be continuous, while TMs are discrete. Even an "infinite" TM can't truly encode real numbers (I think) because its infinity is of the countable variety, while Real numbers are uncountably infinite.
There's a branch of Maths called constructivism which requires values/proofs/etc. to be "constructable" in principle. For example, the law of the excluded middle ('for all X, either X is true or (not X) is true') is not constructive, since it doesn't give us a value: we don't know whether we're dealing with X or (not X).
http://en.wikipedia.org/wiki/Constructivism_(mathematics)
Constructive mathematics turns out to be very closely related to computation, and is one reason why functional programming gets so much research attention.
Constructivists don't have a problem with infinite objects, like the set of all Natural numbers, if they can be represented in a finite way (eg. as a function). There's another branch of Mathematics known as finitism, which regards infinities as not existing. What you're describing is ultrafinitism, which regards really big things as not existing: http://en.wikipedia.org/wiki/Ultrafinitism
Or, as N becomes large enough to emulate the English language, use a Berry's-paradox-style algorithm to show there is no limit to what size you want.
Another approach would be to use a probability-based event for halting: it is always possible that a series of coin flips could keep coming up heads 1,000,000 times in a row, or a zillion times; see the St. Petersburg paradox.
As far as a mathematician is concerned, there is no difference between 9^9, 387420489, 387420499 - 10, 774840978 / 2, or 193710244.5 * 2. They all represent the same quantity. That some are not in their fully reduced form does not change that they are unambiguous representations of the same number.
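A two-line check, for the skeptical (the set collapses because every spelling denotes the same integer):

    vals = {9 ** 9, 387420489, 387420499 - 10, 774840978 // 2, int(193710244.5 * 2)}
    print(vals)   # {387420489}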
Even if you are unable to see it that way, we can easily rephrase the exercise to be "write down a way to compute the largest number you can in 15 seconds". This is an enlightening mental exercise, it would be silly to dismiss it for such trivial reasons.
This would imply BB(n) is not well defined. To the contrary, BB(n) is finite for every n.
2) Question: I understand the argument for noncomputability of Busy Beavers given, but couldn't you just argue that BB(n) is finite, therefore BB(n) is a computable sum of 1+1+...+1, therefore BB(n) is computable given a finite amount of time, so any given number in the sequence is itself computable, therefore by mathematical induction all elements in the sequence are computable? Clearly not, but I don't understand why this doesn't work.
2) For each individual BB number, there's a program that prints it. But there's no program that prints the whole infinite sequence of BB numbers one by one. All finite sequences are computable, but not all infinite sequences are.
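For instance, with the proven value Σ(4) = 13 of the ones-written variant, the per-value program is trivial once the value is known:

    def print_bb4():
        print(13)   # Sigma(4) = 13: the 4-state champion writes 13 ones

    # What cannot exist is one program that prints BB(1), BB(2), BB(3), ...
    # forever; such a program would let you solve the halting problem.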
One more reason I'm happy English is the most common international language I guess.
Edit: Ah wait, I get it now. Misunderstanding was in confusing "Turing machine" with "universal Turing machine." Any given T(N) will have a finite number of steps it can complete before halting, and since a universal Turing machine or equivalent (e.g. rule 110) can emulate any arbitrary Turing machine, I was modeling it as just letting an arbitrary UTM run forever. But the UTM is just emulating a specific T(N) each time, and you'd have to keep going up emulating different machines for each N to get to each next BB without running out of steps. So you can write programs to calculate arbitrary BBs indefinitely, but no single program to calculate them all. Hence the infinite sequence is noncomputable despite any finite length of it being computable. Thanks for the point in the right direction.
[1] https://en.wiktionary.org/wiki/four_score_and_seven_years_ag...
Here is my modification:
M=Nest;
u[f_][n_][a_]:=If[n<1,f@a,M[u[f][n-1],a,a]];
u[#][#@9][#@9]&@(u[#!&][#][#]&)
82 chars total. Comments:
(*start with definition of Knuth up arrow*)
u1[n_][a_][b_]:=If[n==0,a b,Nest[u1[n-1]@a,1,b]]
(*let's treat 1 as a symbol and take 1 == b == a*)
u2[n_][a_]:=If[n==0,a a,Nest[u2[n-1],a,a]]
(*next define for arbitrary function f instead of multiplication*)
u[f_][n_][a_]:=If[n==0,f@a,Nest[u[f][n-1],a,a]]
(*numerical example when we take n<3 instead of n==0*)
u[#! &][#][#] &@3 = u[#! &][3][3] = ((3!)!)! = 720! ≈ 2.6*10^1746
(*Next take the function f and parameters a to be: *)
f = u[#!&][#][#]&
a = f@9
(*compute final number*)
u[f][a][a]
(*those 3 steps are shortened to: *)
u[#][#@9][#@9]&@(u[#!&][#][#]&)

Although his reasoning is that it is not valid because it is always changing, not specifically because it is unknown. Still, I assume that "Number of grains of sand in the Sahara at exactly midnight, Jan. 1 2050, on this atomic clock" would also be disallowed, even though it's possible we would somehow be able to know exactly what that number is in the future.
If we presume that the Big Bang model of the Universe is true, then in the earliest moment of time, can the state of the Universe be measured? If it can be measured, how does it transition from a measurable state to an immeasurable one?
I'll have to give some more thought to what you've said, but I wanted to give you a basis for why I believe it could be deterministic. Clearly I'm making some assumptions, and based on my postulate, this is a state that we could never measure ourselves. I don't know if our inability to make these measurements removes the possibility of determinism.
But remember:
Never forget that it is a waste of energy to do the same thing twice, and that if you know precisely what is to be done, you need not do it at all. --- E. E. "Doc" Smith (1930)
...the optimal solution avoids all pattern.
--- Douglas Hofstadter (1983)
http://djm.cc/bignum-results.txt

So I would recommend avoiding things like f@f@f@f@a where there is clearly a pattern.
Maybe this quote from the googology wiki will set you on a more productive track:
Googologists generally avoid many of the common responses such as "infinity," "anything you can come up with plus 1," "the largest number that can be named in ten words," "the largest number imaginable," "a zillion," "a hundred billion trillion million googolplex" or other indefinite, infinite, ill-defined, or inelegant responses. Rather googologists are interested in defining definite numbers using efficient and far reaching structural schemes, and don't attempt to forestall the unending quest for larger numbers, but rather encourage it. So perhaps a more accurate description of the challenge is: "What is the largest number you can come up with using the simplest tools?"
If so, I think I understand what you mean and I agree with you.
My thoughts were a bit confused on these topics so my posts probably contain some reasoning errors, but in the end I think what matters is we agree on the answer to OP's question, i.e.
For any given theory T at least as strong as ZFC (and maybe even some weaker ones):

- BB as a function is well defined in T
- However, there is some number n_T after which T can't prove any upper bound for BB(n), for any n > n_T
I think this is a very interesting result because the fact that any axiomatic system will be powerless to describe this function's growth after only a small number of steps expresses pretty well how mind numbingly fast it grows.
Assume by contradiction that there is a computable function f that grows faster than the busy beaver function. We can then make a Turing Machine that takes input n, calculates f(n), and then constructs and simulates every n-state Turing Machine for f(n) steps. Since f grows faster than the number of steps the most productive TM would take, any simulated machine not in the halt state would never halt, and we could select the busiest beaver out of the remaining Turing Machines. We have constructed a TM that computes the busy beaver function, which is a contradiction. Thus, there can be no computable function that grows faster than the busy beaver function.
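Sketched in Python (hypothetical by construction, since no such f can exist; enumerate_machines and simulate are assumed helpers):

    def busiest_beaver(n, f, enumerate_machines, simulate):
        # if f(n) really exceeded every halting n-state machine's runtime,
        # then "silent after f(n) steps" would mean "never halts", and the
        # max over the survivors would be exactly BB(n) -- a contradiction
        budget = f(n)
        scores = (simulate(m, budget) for m in enumerate_machines(n))
        return max(s for s in scores if s is not None)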