zlacker

[return to "For algorithms, a little memory outweighs a lot of time"]
1. LPisGo+Y9[view] [source] 2025-05-21 20:27:55
>>makira+(OP)
I think it is very intuitive that more space beats the pants off of more time.

In O(n) time you can only touch O(n) cells of a tape, but a tape of length n (over a 2-symbol alphabet) has 2^n possible configurations, so n cells of space can encode vastly more than n steps of time can explore.
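
The everyday flavor of that trade, separate from the tape-counting argument (a toy example of mine, standard library only): a small memo table buys back an exponential amount of time.

    from functools import lru_cache

    # Naive recursion: roughly phi^n calls and no table at all.
    def fib_slow(n):
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    # Same recursion plus an O(n)-entry memo table: O(n) time.
    @lru_cache(maxsize=None)
    def fib_fast(n):
        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

    print(fib_fast(200))  # instant; fib_slow(200) would never finish in practice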

◧◩
2. frollo+8q[view] [source] 2025-05-21 22:35:11
>>LPisGo+Y9
Also, the O(1) random memory access assumption makes it easy to take memory for granted. Really it's something like O(n^(1/3)) when you scale the computer to the size of the problem, because a signal has to cross a physical distance that grows with the cube root of the memory's volume, and you can see this in practice in datacenters (rough numbers in the sketch below).

I forget the name of the O(1) access model. Not UMA, something else.
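
Back-of-the-envelope sketch (my own toy cost model and names, constant factors dropped): if n cells are packed into a 3-D volume, a signal path grows like n^(1/3), so one pass of n random accesses costs about n^(4/3) rather than the RAM model's n.

    # Toy cost model, not measured data: one access into a memory of n cells
    # laid out in d dimensions is charged n**(1/d) "distance units";
    # dim=None means the idealized O(1)-access RAM model.
    def pass_cost(n, dim=None):
        per_access = 1 if dim is None else n ** (1.0 / dim)
        return n * per_access

    for n in (10**6, 10**9, 10**12):
        print(f"n = {n:.0e}: RAM model {pass_cost(n):.1e}, "
              f"3-D physical model {pass_cost(n, dim=3):.1e}")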

◧◩◪
3. cperci+3t[view] [source] 2025-05-21 23:05:55
>>frollo+8q
O(n^(1/2)) really, since data centers are 2-dimensional, not 3-dimensional.

(Quite apart from the practical "we build on the surface of the earth" point, heat dissipation alone limits you to an essentially 2-dimensional circuit in 3-space.)
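
Same toy model as above with the dimension dropped to 2; illustrative numbers only, to show how much that exponent matters.

    # Per-access latency under the same toy model: n**(1/d) for a
    # d-dimensional layout.  2-D is a polynomially worse exponent than 3-D.
    for n in (10**6, 10**9, 10**12):
        print(f"n = {n:.0e}: 3-D latency ~ {n ** (1 / 3):.1e}, "
              f"2-D latency ~ {n ** (1 / 2):.1e}")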

◧◩◪◨
4. mpotea+Fz[view] [source] 2025-05-22 00:23:01
>>cperci+3t
More fundamentally it's O(n^(1/2)), due to the holographic principle: the maximum amount of information encodable in a given region of space scales with its surface area rather than its volume.

(Even more of an aside than your practical heat-dissipation constraint.)
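
Rough numbers (the bound A / (4·l_p^2) nats is the standard holographic one; the radii and the conversion to bits are just my illustration of the area scaling):

    import math

    PLANCK_LENGTH = 1.616e-35  # metres

    # Holographic bound: information inside a sphere of surface area A is at
    # most A / (4 * l_p^2) nats, i.e. A / (4 * l_p^2 * ln 2) bits.
    def max_bits(radius_m):
        area = 4 * math.pi * radius_m ** 2
        return area / (4 * PLANCK_LENGTH ** 2 * math.log(2))

    for r in (0.01, 1.0, 100.0):
        print(f"r = {r:6.2f} m -> ~{max_bits(r):.2e} bits  (grows with r^2, not r^3)")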

[go to top]