zlacker

[return to "For algorithms, a little memory outweighs a lot of time"]
1. LPisGo+Y9 2025-05-21 20:27:55
>>makira+(OP)
I think it is very intuitive that more space beats the pants off of more time.

In time O(n) you can touch at most O(n) cells on a tape, but a tape of length n (over a 2-symbol alphabet) has 2^n possible configurations, so you can do so much more with n space than with n time.
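A quick back-of-the-envelope sketch of that gap (just the growth rates, not a formal statement of anything):

```python
# Rough illustration: a linear-time machine gets ~n steps, but an n-cell
# binary tape has 2^n distinct contents. Numbers are only there to show
# how fast the two quantities diverge.
for n in [10, 20, 40, 80]:
    time_budget = n            # O(n) steps: can visit at most ~n cells
    configurations = 2 ** n    # distinct contents of an n-cell binary tape
    print(f"n={n:>3}: ~{time_budget} steps vs {configurations} tape configurations")
```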

2. frollo+8q 2025-05-21 22:35:11
>>LPisGo+Y9
Also, the O(1) random memory access assumption makes it easy to take memory for granted. Really it's something like O(n^(1/3)) when you're scaling the computer to the size of the problem, since memory packed into 3-D space sits at distance ~n^(1/3) from the processor, and you can see this in practice in datacenters.

I forget the name of the O(1) access model. Not UMA, something else.
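Rough sketch of the geometric argument I mean (the constants below are made up; only the n^(1/3) scaling matters):

```python
# Back-of-the-envelope for why random access can't stay O(1) as memory grows:
# if n bits are packed at fixed density into 3-D space, the occupied volume
# is ~n, so the typical distance a signal must travel is ~n^(1/3).
# density and signal_speed are invented placeholder values.
def access_latency(n_bits, density=1e18, signal_speed=2e8):
    """Latency (seconds) to reach a random bit in a 3-D blob of memory."""
    volume = n_bits / density          # m^3 occupied by the memory
    distance = volume ** (1 / 3)       # side length of the blob, ~n^(1/3)
    return distance / signal_speed     # one-way travel time

for n in [2**30, 2**40, 2**50, 2**60]:
    print(f"{n:>22} bits -> ~{access_latency(n):.2e} s per access")
```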

3. Legion+At 2025-05-21 23:10:58
>>frollo+8q
On the other hand, actual computers can work in parallel when you scale the hardware, something the TM formulation doesn't cover. It's interesting to ask which algorithms work well when you have lots of computing power but are constrained by data locality. (Brains being the classic example of this.)
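Toy example of the kind of access pattern that parallel hardware rewards (a hypothetical chunked reduction, not anything from the article): each worker only reads its own contiguous slice, so the data can live next to the compute that uses it.

```python
# Hypothetical sketch: a chunked (tree-style) reduction. Each worker sums a
# contiguous, local slice of the input; only the tiny partial results need
# to travel for the final combine step.
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, n_workers=4):
    chunk = (len(data) + n_workers - 1) // n_workers
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, slices))   # each worker: local data only
    return sum(partials)                         # small final combine

print(chunked_sum(list(range(1_000_000))))       # 499999500000
```
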
4. LPisGo+mx 2025-05-21 23:48:18
>>Legion+At
Multitape TMs are pretty well studied.