The (implicit) rules of the game require the number to be finite. The reason is not that infinity somehow fails to be "the largest", but that the game of "write infinity in the smallest number of {resource}" is trivial and uninteresting. (At least under any even remotely sensible encoding scheme. Malbolge[1] experts may chime in on how easy it is to write infinity in that language.) So if you like, pretend we played that game already and have moved on to this one. "Write infinity" is at best a warmup for this game.
(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too. The argument is somewhat longer, but not that difficult.)
edit: To clarify further, you could create a new formal language L+ that axiomatically defines 0 as "the largest number according to L", but that would no longer be L; it would be L+. In any given language with rules at this level of power, you cannot make that statement without creating a new language with even more powerful rules. That is, each specific set of rules is capped, and adding more rules raises the cap, but then you have a different language.
[1] https://en.wikipedia.org/wiki/Berry_paradox
[2] https://terrytao.wordpress.com/2010/11/02/the-no-self-defeat...
The largest number representable in 64 bits - >>38414303 - Nov 2023 (105 comments)
The largest number representable in 64 bits - >>35677148 - April 2023 (42 comments)
(I haven't put 2023 in the current title since the article says it's been significantly expanded since then.)
David Metzler has this really cool playlist "Ridiculously Huge Numbers" that digs into the details in an accessible way:
https://www.youtube.com/playlist?list=PL3A50BB9C34AB36B3
By the end, you're thinking about functions that grow so fast TREE is utterly insignificant. Surprisingly, getting there just needs a small bit of machinery beyond Peano Arithmetic [0].
Then you can ponder doing all of that but making one tiny tweak: replacing successorship with BB. Holy cow...
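To make the "small bit of machinery" concrete, here is a minimal sketch (mine, not Metzler's) of the fast-growing hierarchy at finite indices. The "tiny tweak" above amounts to swapping the successor base case for a Busy Beaver-style function, which pushes growth beyond anything computable.

```python
# Fast-growing hierarchy for finite indices (illustrative; tiny inputs only).
# f_0 is the successor; f_{k+1}(n) iterates f_k n times starting from n.
def f(k: int, n: int) -> int:
    if k == 0:
        return n + 1                # f_0(n) = n + 1 (successorship)
    result = n
    for _ in range(n):              # f_{k+1}(n) = f_k^n(n)
        result = f(k - 1, result)
    return result

print(f(1, 3))  # f_1(3) = 2*3 = 6
print(f(2, 3))  # f_2(3) = 3 * 2**3 = 24
```

Already f_3 grows like iterated exponentials, and transfinite indices (which is where the playlist goes) leave TREE behind; replacing `n + 1` in the base case with BB(n) makes every level uncomputable.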
[0]: https://en.wikipedia.org/wiki/Theories_of_iterated_inductive...
This is certainly pragmatic, although it does break the math:
q type     size    literal forms                          underlying integer value (encoding)
---------------------------------------------------------------------------------------------------------------------------------
short (h)  16-bit  0Nh / -0Wh / 0Wh                       null = -32768; -inf = -32767; +inf = 32767
int (i)    32-bit  0Ni / -0Wi / 0Wi                       null = -2147483648; -inf = -2147483647; +inf = 2147483647
long (j)   64-bit  0N (or 0Nj) / -0W (or -0Wj) / 0W (or 0Wj)  null = -9223372036854775808; -inf = -9223372036854775807; +inf = 9223372036854775807
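For anyone outside the q world, here is a small Python sketch (illustrative, not q itself) of how the long (j) row carves its sentinels out of the ordinary two's-complement range, and one way the math "breaks": incrementing +inf wraps around to null.

```python
# Sketch of kdb+'s long (j) sentinel encoding, per the table above.
INT64_MIN = -2**63          # -9223372036854775808
NULL_J    = INT64_MIN       # 0N  : null
NEG_INF_J = INT64_MIN + 1   # -0W : negative infinity
POS_INF_J = 2**63 - 1       # 0W  : positive infinity

def describe(x: int) -> str:
    """Map a raw 64-bit signed value to its q-style meaning."""
    if x == NULL_J:
        return "0N (null)"
    if x == NEG_INF_J:
        return "-0W (-inf)"
    if x == POS_INF_J:
        return "0W (+inf)"
    return str(x)

def wrap64(x: int) -> int:
    """Reduce to signed 64-bit two's complement, as the hardware would."""
    return (x + 2**63) % 2**64 - 2**63

# Plain wraparound arithmetic lands on the sentinels:
print(describe(wrap64(POS_INF_J + 1)))  # → 0N (null)
```

The upside is that null and both infinities cost nothing beyond three reserved bit patterns; the downside is exactly this kind of wraparound, where ordinary integer identities stop holding near the sentinels.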
2. Using a Turing machine to model a von Neumann machine works exactly like a Rube Goldberg machine. It even looks like one [1].
3. There is no point in talking about a 64-bit limit when the underlying model requires an infinite amount of RAM (tape).
4. > A Rube Goldberg machine is one intentionally designed to perform a simple task in a comically overcomplicated way
People usually don't realize they've built a Rube Goldberg machine...
5. > Programs like Melo and w128
My point is that just as you pre-defined the program you're going to use, you can pre-define the largest integer. That's 1 bit of entropy. I was working on a project with custom 5-bit floating-point numbers implemented in hardware, and they had pre-defined parts of the mantissa. So the actual bits are just part of the information.
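To illustrate the point about pre-defined bits, here is a decoder for a hypothetical 5-bit float (1 sign, 2 exponent, 2 mantissa bits, exponent bias 1) — my own toy layout, not the actual format from that project. The choice of layout and bias lives outside the 5 bits, so the bits alone are only part of the information.

```python
# Hypothetical 5-bit float: [sign:1][exponent:2][mantissa:2], bias 1.
def decode5(bits: int) -> float:
    sign = -1.0 if bits & 0b10000 else 1.0
    exp = (bits >> 2) & 0b11
    man = (bits & 0b11) / 4.0          # 2-bit fraction
    if exp == 0:                        # subnormal: no implicit leading 1
        return sign * man               # 2**(1 - bias) == 2**0 == 1
    return sign * (1.0 + man) * 2.0 ** (exp - 1)

print(decode5(0b01111))  # largest positive value: 1.75 * 2**2 = 7.0
```

The "largest representable number" here is 7.0 only under this particular pre-agreed interpretation; a different bias or field split over the same 5 bits yields a different maximum, which is the parent's point.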
---
1. https://en.wikipedia.org/wiki/Turing_machine#/media/File:Tur...