zlacker

For algorithms, a little memory outweighs a lot of time

submitted by makira+(OP) on 2025-05-21 19:34:20 | 343 points 139 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
1. cperci+r9[view] [source] 2025-05-21 20:24:10
>>makira+(OP)
Minus the fuzz: A multitape Turing machine running in time t can be simulated using O(sqrt(t log t)) space (and typically more than t time).

https://arxiv.org/abs/2502.17779
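
Stated in standard complexity-class notation (a paraphrase of the comment above, not a quote from the paper):

    \mathrm{TIME}[t] \subseteq \mathrm{SPACE}\!\left[O\!\left(\sqrt{t \log t}\right)\right]

The trade-off is in running time: as noted above, the space-efficient simulation typically takes more than t steps.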

29. feline+vz[view] [source] 2025-05-22 00:19:19
>>makira+(OP)
Ryan Williams lecture (how he started): https://www.youtube.com/live/0DrFB2Cp7tg

And paper: https://people.csail.mit.edu/rrw/time-vs-space.pdf

◧◩◪
42. jodrel+DJ[view] [source] [discussion] 2025-05-22 02:22:43
>>crmd+Jz
Reminds me of when I imagined brute-forcing every possible small picture as simply 256 shades of gray for each pixel × 307,200 pixels (640 × 480) = 78 million possible pictures.

Actually I don't have any intuition for why that's wrong, except that if we concatenate the rows into one long row, then the picture can be considered a number 307200 digits long in base 256, and then I see that it could represent 256^307200 possible different values. Which is a lot: https://www.wolframalpha.com/input?i=256%5E307200
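
For concreteness, here is the arithmetic behind both counts (the digit count is mine; the rest restates the comment):

    256 \times 307200 = 78{,}643{,}200 \approx 7.9 \times 10^{7}
    \qquad\text{vs.}\qquad
    256^{307200} = 2^{2457600}, \text{ a number with } 739{,}812 \text{ decimal digits}

The first adds up one choice per pixel; the second lets every pixel vary independently, which means multiplying the 256 per-pixel choices together rather than adding them.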

◧◩◪
58. Traube+O01[view] [source] [discussion] 2025-05-22 06:14:28
>>crmd+Jz
you might be interested in pifs

https://github.com/philipl/pifs

◧◩◪◨
60. deadfo+k11[view] [source] [discussion] 2025-05-22 06:20:29
>>jodrel+DJ
I think at some point you should have realized that there are obviously more than 78 million possible greyscale 640x480 pictures. There are a lot of intuitive examples, but just think of this:

https://images.lsnglobal.com/ZFSJiK61WTql9okXV1N5XyGtCEc=/fi...

If there were only 78 million possible pictures, how could that portrait be so recognizably one specific person? Wouldn't that mean that your entire picture space wouldn't even be able to fit a single portrait of everyone in Germany?

◧◩
64. handsc+7b1[view] [source] [discussion] 2025-05-22 08:14:34
>>whatev+ti
https://conwaylife.com/wiki/HashLife is an algorithm for doing basically this in Conway’s Game of Life, which is Turing complete. I remember my first impression being complete confusion: here’s a tick-by-tick simulation too varied and complex to encapsulate in a formula, and you’re telling me I can just skip way into its future?
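
For anyone curious what the core trick looks like, here is a minimal sketch of the memoization idea in Python (not the full quadtree/superspeed machinery of HashLife, and not code from the linked wiki): evolve small fixed-size blocks and cache the results, so a pattern that recurs anywhere on the grid is only ever computed once.

    # Minimal sketch of the memoization idea (not full HashLife): cache the
    # one-step evolution of small blocks so identical blocks are computed once.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def step_block(block):
        """Advance the interior of a block one generation of Conway's Life.

        `block` is a tuple of tuples of 0/1 cells; the outermost ring only
        supplies neighbourhood context, so the result is 2 smaller in each
        dimension. Because tuples are hashable, repeated blocks hit the cache.
        """
        h, w = len(block), len(block[0])
        out = []
        for i in range(1, h - 1):
            row = []
            for j in range(1, w - 1):
                live = sum(block[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                row.append(1 if live == 3 or (block[i][j] and live == 2) else 0)
            out.append(tuple(row))
        return tuple(out)

Real HashLife goes further: it arranges blocks in a quadtree and memoizes results many generations ahead, which is what makes the "skip way into the future" part possible.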
◧◩
65. awande+Ub1[view] [source] [discussion] 2025-05-22 08:20:57
>>ziofil+r21
You mean something like this https://en.wikipedia.org/wiki/Bremermann%27s_limit or this https://en.wikipedia.org/wiki/Quantum_speed_limit?
76. ChrisM+Dk1[view] [source] 2025-05-22 09:55:54
>>makira+(OP)
Interesting. It's one of those things that I’ve always “just assumed,” without thinking about it.

I did a lot of raster graphics programming, in my career, and graphics work makes heavy use of lookup tables.

Yesterday, I posted a rather simple tool I wrote[0]: a server that “frontloads” a set of polygons into a database, and then uses them, at query time. It’s fairly fast (but I’m sure it could be a lot faster). I wrote it in a few hours, and got pretty good performance, right out of the starting gate.

Pretty basic stuff. I doubt the pattern is unique, but it’s one that I’ve used for ages. It’s where I try to do as much “upfront” work as possible, and store “half-baked” results into memory.

Like I said, I always “just worked that way,” and never really thought about it. There’s been a lot of “rule of thumb” stuff in my work. Didn’t have an MIT professor to teach it, but it’s sort of gratifying to see that it wasn’t just “old wives” stuff.

There’s probably a ton of stuff that we do, every day, that we don’t think about. Some of it almost certainly originated from really smart folks like him, finding the best way (like the “winding number” algorithm, in that server[1]), and some of it also probably comes from “grug-brained programmers,” simply doing what makes sense.

[0] >>44046227

[1] https://github.com/LittleGreenViper/LGV_TZ_Lookup/blob/e247f...
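
That "frontload the work, then look it up" pattern is generic enough to sketch. Here is a toy illustration in Python (nothing below is from the linked server; the table size is an arbitrary choice):

    import math

    # Do the expensive work once, up front, and keep the "half-baked" results
    # in memory; each query then becomes a cheap table lookup.
    TABLE_SIZE = 4096
    SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def fast_sin(theta):
        """Approximate sin(theta) from the precomputed table (an old raster-graphics trick)."""
        idx = int(theta / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
        return SIN_TABLE[idx]

The shape is the same whether the precomputed thing is a sine table, a rasterized polygon, or a database of geometries: spend memory once so every query spends less time.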

◧◩◪◨
106. lesuor+Md2[view] [source] [discussion] 2025-05-22 16:48:19
>>benchl+DJ1
If you don't cover _actually_ every single block, then you have Huffman coding [1].

I imagine if you have a good idea of the incoming data, you could probably do a similar encoding scheme where you use 7 bits to point to a ~512-bit blob and the 8th bit means the next 512 bits couldn't be compressed.

[1]: https://en.wikipedia.org/wiki/Huffman_coding
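
A toy version of that idea might look like the sketch below (the 64-byte blobs, the 128-entry dictionary, and the escape convention are assumptions for illustration, not a real format):

    BLOB_SIZE = 64  # 512 bits

    def encode(data: bytes, dictionary: list[bytes]) -> bytes:
        """Emit one tag byte per 64-byte chunk: high bit clear = 7-bit index
        into the dictionary, high bit set = the chunk follows literally."""
        index = {blob: i for i, blob in enumerate(dictionary[:128])}
        out = bytearray()
        for off in range(0, len(data), BLOB_SIZE):
            chunk = data[off:off + BLOB_SIZE]
            if chunk in index:
                out.append(index[chunk])   # 0xxxxxxx: dictionary hit
            else:
                out.append(0x80)           # 1xxxxxxx: literal block follows
                out.extend(chunk)
        return bytes(out)

    def decode(encoded: bytes, dictionary: list[bytes]) -> bytes:
        out = bytearray()
        i = 0
        while i < len(encoded):
            tag = encoded[i]
            i += 1
            if tag & 0x80:
                out.extend(encoded[i:i + BLOB_SIZE])
                i += BLOB_SIZE
            else:
                out.extend(dictionary[tag])
        return bytes(out)

Whether this beats a generic Huffman or LZ pass depends entirely on how often the incoming chunks actually match the 128 blobs you bet on.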

◧◩
114. godels+E23[view] [source] [discussion] 2025-05-22 21:08:50
>>woolio+Gg1
I think they used to be better but have really taken a blatant turn for the worse. I really thought that wormhole fiasco would have killed them. Going 4 whole months before adding the editor's note is beyond egregious[0]. Mistakes happen, but 4 months kills all credibility. You have to act fast on those things! There were big names raising serious concerns on day 1, and it really shows they didn't do the due diligence to get outside verification before running a piece that they knew would be really popular.

All this accomplishes is discrediting science: trading personal gain for eroding the very thing they make their money off of. This is a major part of why Americans (and people in general) have such high distrust of science. News outlets, and in particular science-focused news outlets, constantly spew inaccurate information. It really should be no wonder that so many people are confused about so many scientific topics: unless they actually put in the years it takes to become an expert in a field, they are going to have a difficult time distinguishing fact from fiction. And why shouldn't the average person expect to trust a source like Quanta? They're "full of experts", right? smh

[0] This is the earliest archive I see with the note. Press back one day and it should not be there. Article was published on Nov 30 2022, along with a youtube video https://web.archive.org/web/20230329191417/https://www.quant...

◧◩◪
115. godels+p43[view] [source] [discussion] 2025-05-22 21:17:06
>>simpat+S62
Quanta targets people who are above average. So I don't think it is too much for them to give a sentence or two stating that. Or even a little graphic could do wonders. I don't think it would take much time or effort to make a graphic like the one on wikipedia[0] and just throw in some equations within the ring. You can easily simplify too, by removing NL and merging EXP. Hell, look at the graphics here[1]. That's much more work.
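
For reference, the simplified containment chain such a graphic would show (standard complexity theory, with NL dropped as suggested):

    \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXP}

P ≠ EXP is known from the time hierarchy theorem, but whether any individual inclusion in that chain is strict remains open.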

I don't think Quanta should be afraid of showing math to people. That's really their whole purpose. Even if I think they've made some egregious mistakes that make them untrustworthy...[2]

[0] https://en.wikipedia.org/wiki/PSPACE#/media/File:Complexity_...

[1] https://www.quantamagazine.org/june-huh-high-school-dropout-...

[2] >>44067043

◧◩◪◨
117. simpat+ja3[view] [source] [discussion] 2025-05-22 21:51:56
>>godels+p43
I suppose my point is that the readers who will wonder about this are a) very likely to know about complexity classes already, or b) capable of learning about them themselves. Perhaps a simple link to something like https://complexityzoo.net/Petting_Zoo would have been a nice middle ground.

Edit: Aaronson even mentions the n^100 problem in the section about P!

◧◩
130. bbruba+4a5[view] [source] [discussion] 2025-05-23 18:25:08
>>woolio+Gg1
I'm the author of this article. If you ask a complexity theorist, they will tell you that they did in fact have a general intuition that certain problems require space close to linear in time to solve (see e.g., Ryan's comment #22 on Scott Aaronson's blog post about the result: https://scottaaronson.blog/?p=8680, and the comments after that).

The most intuitive way to see this is in a circuit/DAG picture, where the goal is to get from the input nodes of the graph to the output nodes. Some graphs are very "wide": cut the graph at some intermediate point, and there will be a lot of edges crossing the cut, each of which represents some information from an earlier stage in the computation that you'll need to remember to get to the output. Ryan's first result is a general-purpose method for doing any computation, even ones whose graph structure looks like this, in asymptotically far less space. That is precisely what makes the result so surprising!
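
Put a bit more compactly (my paraphrase of the cut picture above, not notation from the paper):

    \text{expected:}\quad \text{space} \gtrsim \max_{\text{cuts}} |\text{cut}| = \Theta(t) \text{ in the worst case}
    \qquad\qquad
    \text{shown:}\quad \text{space} = O\!\left(\sqrt{t \log t}\right)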

My article was quite explicit in multiple places that the universal/comprehensive character of the result was the counterintuitive part:

- In the first paragraph: "memory was more powerful than computer scientists believed: A small amount would be as helpful as a lot of time in all conceivable computations."

- Further down in the introduction, in the passage you quoted: "Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their runtime, and researchers had long assumed there’s no way to do better. Williams’ proof established a mathematical procedure for transforming any algorithm — no matter what it does — into a form that uses much less space."

- In the third section, I explicitly state that researchers do believe space is more powerful than time in the specific sense that you're criticizing my article for misrepresenting: "But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren’t in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn’t as forgiving — once it passes, you can’t get it back."

- In the fourth section, I explain why researchers didn't think the HPV75 result could be improved further, despite their intuition that space is more powerful than time in the above sense: "While many problems can be solved with much less space than time, some intuitively seemed like they’d need nearly as much space as time."

TCS (and complexity theory specifically) are complicated subjects. I spend a lot of time interviewing researchers and thinking about how to distill the results of my reporting into a form that is accessible to readers with widely varying levels of familiarity with the subject matter. You are of course well within your rights to critique my stylistic choices, the narrative aspects of the story, and the order in which I presented information, but I will push back against the claim that my article is spreading misinformation about complexity theory. You're referring to a misconception that arises, by your own admission, when you don't read carefully. If it's the headline you object to, you could lodge a similar complaint against the complexity theorist Lance Fortnow: https://blog.computationalcomplexity.org/2025/02/you-need-mu....

[go to top]