zlacker

[return to "For algorithms, a little memory outweighs a lot of time"]
1. whatev+ti[view] [source] 2025-05-21 21:31:16
>>makira+(OP)
Lookup tables with precalculated things for the win!

In fact, I don’t think we would need processors anymore if we centrally stored the result of every operation ever performed by our processors.

Now fast retrieval is another problem for another thread.
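
As a toy illustration of the precalculated-table idea (my own sketch, not something from the thread): build a table of per-byte bit counts once, then answer every query with lookups instead of recomputation.

    # Precompute once: set-bit counts for every possible byte value.
    POPCOUNT_8 = [bin(b).count("1") for b in range(256)]

    def popcount(x: int) -> int:
        """Count set bits of a non-negative int, one table lookup per byte."""
        total = 0
        while x:
            total += POPCOUNT_8[x & 0xFF]
            x >>= 8
        return total

    print(popcount(0b1011_0110))  # 5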

2. crmd+Jz[view] [source] 2025-05-22 00:23:09
>>whatev+ti
Reminds me of when I started working on storage systems as a young man. I once suggested pre-computing every possible 4KB block and just using pointers to the correct block as data is written, until someone pointed out that the number of unique 4KB blocks (2^32768) far exceeds the number of atoms in the universe.
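
For scale, a back-of-the-envelope check (my arithmetic, not from the original comment): a 4KB block is 32768 bits, so there are 2^32768 distinct blocks, roughly 10^9864 of them, against an estimated ~10^80 atoms in the observable universe.

    import math

    # 4 KiB = 32768 bits, so there are 2**32768 distinct blocks.
    digits = 32768 * math.log10(2)        # ~9864 decimal digits
    print(f"2^32768 ~= 10^{digits:.0f}")  # vs. roughly 10^80 atoms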
3. ww520+ND[view] [source] 2025-05-22 01:13:20
>>crmd+Jz
The idea is not too far off. You could compute a hash of an existing data block and store the hash-to-block mapping. Now you can use the hash anywhere that data block resides, i.e. any duplicate data blocks can share the same hash. That's how storage deduplication works in a nutshell.
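
A minimal sketch of that scheme (my own illustration, assuming SHA-256 as the content hash): the store keeps one copy per unique block and hands the digest back as the block's address, so writing a duplicate block just returns the existing key.

    import hashlib

    class DedupStore:
        """Toy content-addressed block store: one stored copy per unique block."""

        def __init__(self):
            self.blocks = {}  # hex digest -> block bytes

        def write(self, block: bytes) -> str:
            key = hashlib.sha256(block).hexdigest()
            # Duplicates hash to the same key, so only the first copy is kept.
            self.blocks.setdefault(key, block)
            return key

        def read(self, key: str) -> bytes:
            return self.blocks[key]

    store = DedupStore()
    a = store.write(b"\x00" * 4096)
    b = store.write(b"\x00" * 4096)   # same content, same key, no extra copy
    assert a == b and len(store.blocks) == 1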
4. valent+8E[view] [source] 2025-05-22 01:18:45
>>ww520+ND
Except that there are collisions...
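
Back-of-the-envelope on that risk (my numbers, assuming a 256-bit hash with uniformly random digests): by the birthday bound, the chance of any collision among n blocks is about n^2 / 2^257, which stays negligible for realistic block counts; cautious systems still byte-compare blocks on a digest match.

    # Birthday-bound collision estimate for a 256-bit hash.
    n = 10**15                        # a quadrillion stored blocks
    p = n * (n - 1) / 2 / 2**256      # ~ n^2 / 2^257
    print(p)                          # ~4.3e-48, effectively zero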
5. datame+lF[view] [source] 2025-05-22 01:32:20
>>valent+8E
This might be completely naive, but can a reversible time component be incorporated to distinguish two hash calculations? Meaning, when unpacked/extrapolated it is a unique signifier, but when decomposed it folds back into the standard calculation. Is this feasible?
6. shakna+9Z[view] [source] 2025-05-22 05:54:16
>>datame+lF
Some hashes do have verification bits that are used not just to check that a hash arrived intact, but to distinguish one "identical" hash from another. However, they do tend to be slower hashes.