zlacker

[return to "For algorithms, a little memory outweighs a lot of time"]
1. whatev+ti 2025-05-21 21:31:16
>>makira+(OP)
Lookup tables with precalculated results for the win!

In fact, I don’t think we would need processors anymore if we centrally stored the results of every operation our processors have ever performed.

Now fast retrieval is another problem for another thread.
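
A minimal sketch of the pattern in Python, using an 8-bit popcount table as the precomputed operation (the example is mine, not from the thread):

    # Precompute the popcount of every possible byte once...
    POPCOUNT = [bin(b).count("1") for b in range(256)]

    def popcount32(x: int) -> int:
        # ...then answer 32-bit queries with four table lookups
        # instead of recomputing the bit math each time.
        return (POPCOUNT[x & 0xFF]
                + POPCOUNT[(x >> 8) & 0xFF]
                + POPCOUNT[(x >> 16) & 0xFF]
                + POPCOUNT[(x >> 24) & 0xFF])

    assert popcount32(0xF0F0F0F0) == 16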

2. crmd+Jz 2025-05-22 00:23:09
>>whatev+ti
Reminds me of when I started working on storage systems as a young man. I once suggested pre-computing every possible 4KB block and just writing pointers to the correct block as data comes in, until someone pointed out that the number of unique 4KB blocks (2^32768) far exceeds the number of atoms in the universe.
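
To put a number on that mismatch, a quick back-of-the-envelope in Python:

    import math

    # 4 KB = 4096 bytes = 32768 bits, so there are 2**32768 distinct blocks.
    digits = 32768 * math.log10(2)  # ≈ 9864 decimal digits
    # The observable universe holds roughly 10**80 atoms (an 81-digit number),
    # so the table would need thousands of orders of magnitude more entries.
    print(f"2**32768 has about {digits:.0f} digits; 10**80 has 81.")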
3. makman+gF 2025-05-22 01:30:53
>>crmd+Jz
In some contexts, dictionary encoding (which is approximately what you're suggesting) can actually work great. For example, common values, or null values (which are a common kind of common value). It's just less efficient to try to do it with /every/ block. You have to make it "worth it", which is a function of how frequently the value occurs. Shorter values give you a worse compression ratio on the one hand, but on the other hand they're often likelier to appear in the data, so it makes up for it, to a point.
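
A minimal sketch of that idea in Python (dict_encode and the sample column are illustrative, not any real engine's API):

    def dict_encode(column):
        # Dictionary encoding: store each distinct value once,
        # then replace every occurrence with a small integer code.
        dictionary, codes = {}, []
        for value in column:
            codes.append(dictionary.setdefault(value, len(dictionary)))
        return list(dictionary), codes

    values, codes = dict_encode(["US", "US", None, "DE", "US"])
    # values == ["US", None, "DE"]; codes == [0, 0, 1, 2, 0]

The encoding only pays off when values repeat often enough that the codes plus the dictionary come out smaller than the raw column.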

There are other similar lightweight encoding schemes, like RLE, delta encoding, and frame-of-reference encoding, which are each suited to different data distributions.
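
Minimal sketches of those three in Python, illustrative rather than taken from any real implementation:

    def rle_encode(values):
        # Run-length encoding: collapse runs of repeats into (value, count) pairs.
        out = []
        for v in values:
            if out and out[-1][0] == v:
                out[-1][1] += 1
            else:
                out.append([v, 1])
        return out

    def delta_encode(values):
        # Delta encoding: keep the first value, then store successive differences.
        return [values[0]] + [b - a for a, b in zip(values, values[1:])]

    def frame_of_reference(values):
        # Frame-of-reference: store one base plus small offsets from it.
        base = min(values)
        return base, [v - base for v in values]

    print(rle_encode([7, 7, 7, 2, 2]))             # [[7, 3], [2, 2]]
    print(delta_encode([100, 101, 103, 106]))      # [100, 1, 2, 3]
    print(frame_of_reference([1000, 1003, 1001]))  # (1000, [0, 3, 1])

RLE wins on long runs, delta on slowly changing sequences, and frame-of-reference on values clustered in a narrow range.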
