First of all, every finite number is computable by definition.
And second, your encodings will, unlike those in the lambda calculus, be completely arbitrary.
PS: in my self-delimiting encoding of the lambda calculus, there are only 1058720968043859 < 2^50 closed lambda terms of size up to 64 [1].
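For the curious, here is a rough sketch of how such a count can be computed, assuming the usual binary-lambda-calculus sizes (de Bruijn index i costs i+1 bits, an abstraction costs 2 bits plus its body, an application costs 2 bits plus both subterms); whether it reproduces the exact figure above depends on those conventions matching the ones in [1]:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def terms(size, free):
    """Count lambda terms of exactly `size` bits whose free de Bruijn
    indices are all <= `free`, under the size conventions stated above."""
    total = 0
    if 1 <= size - 1 <= free:        # variable: index i costs i + 1 bits
        total += 1
    if size >= 2:                    # abstraction: 2 bits + body, one more binder in scope
        total += terms(size - 2, free + 1)
    for left in range(1, size - 2):  # application: 2 bits + left subterm + right subterm
        total += terms(left, free) * terms(size - 2 - left, free)
    return total

print(sum(terms(n, 0) for n in range(1, 65)))  # closed terms of size up to 64
```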
You likely mean every integer or rational is computable (although not by definition). There are finite real numbers that are not computable; in fact, most of them are not.
If you have the time to help me with my understanding then I would appreciate it.
I'm looking at Wikipedia's formal definition, which says that for x to be computable you need to provide a computable function f from the naturals to the integers such that, for whatever denominator n you pick, it gives you a numerator f(n) with (f(n)-1)/n and (f(n)+1)/n straddling x.
So, for an integer N, you take f(n) = nN; then (f(n)-1)/n = N - (1/n) and (f(n)+1)/n = N + (1/n).
Therefore any integer N is computable.
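A tiny sanity check of that integer case in Python with exact rationals (the helper `brackets` and the test range are just mine):

```python
from fractions import Fraction

def brackets(x, f, n):
    """Exact check that (f(n) - 1)/n <= x <= (f(n) + 1)/n."""
    return Fraction(f(n) - 1, n) <= x <= Fraction(f(n) + 1, n)

N = 7                        # any integer
f = lambda n: n * N          # the f(n) = nN from above
print(all(brackets(N, f, n) for n in range(1, 1000)))  # True
```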
Now, what is stopping me from doing something similar with a real?
If I say: f(n) = floor(nN)
Now (f(n)-1)/n = floor(nN)/n - (1/n)
It is at this point that I realise I need to go to bed and sleep. If you see this and have the time to explain to me where it falls apart with reals, then I will be most happy. To be clear - I'm quite sure I am wrong, and this isn't me being passive-aggressive about it.
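For when you're back: the same bracketing check for a concrete real, N = sqrt(2). Here floor(n*sqrt(2)) can be computed exactly with integer arithmetic, since n*sqrt(2) = sqrt(2n^2), so math.isqrt(2*n*n) gives it directly. This only illustrates that the construction goes through when floor(nN) is itself computable, which I assume is where the question really lives:

```python
from fractions import Fraction
from math import isqrt

def f(n):
    # floor(n * sqrt(2)), computed exactly: n*sqrt(2) = sqrt(2*n^2)
    return isqrt(2 * n * n)

def brackets(n):
    """Check (f(n)-1)/n <= sqrt(2) <= (f(n)+1)/n by comparing the
    squares of the (nonnegative) bounds against 2, avoiding floats."""
    lo = Fraction(f(n) - 1, n)
    hi = Fraction(f(n) + 1, n)
    return lo * lo <= 2 <= hi * hi

print(all(brackets(n) for n in range(1, 10000)))  # True
```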