zlacker

[return to "For algorithms, a little memory outweighs a lot of time"]
1. whatev+ti 2025-05-21 21:31:16
>>makira+(OP)
Lookup tables with precalculated things for the win!

In fact, I don’t think we would need processors anymore if we centrally stored the results of every operation our processors have ever performed.

Now fast retrieval is another problem for another thread.
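
The non-joke version of this is just a precomputed lookup table. A minimal Rust sketch (the table contents and names are mine, purely for illustration):

    fn main() {
        // Precompute the popcount of every possible byte value, once.
        let table: Vec<u32> = (0u32..256).map(|b| b.count_ones()).collect();

        // "Fast retrieval" is then a single array index per query.
        assert_eq!(table[0b1011_0110], 5);
        println!("popcount(0xF0) = {}", table[0xF0]);
    }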

2. crmd+Jz 2025-05-22 00:23:09
>>whatev+ti
Reminds me of when I started working on storage systems as a young man. I once suggested pre-computing every possible 4KB block and just using pointers to the correct block as data is written, until someone pointed out that the number of unique 4KB blocks (2^32768, since 4096 bytes is 32768 bits) far exceeds the number of atoms in the universe.

3. Neverm+7j2 2025-05-22 17:15:12
>>crmd+Jz
The other problem is that to address all possible 4096-byte blocks, you need a 4096-byte address, since there are 2^32768 of them. I suppose we would expect the actual number of blocks computed and reused to be a sparse subset.

Alternatively, have you considered 8-byte blocks?

If your block pointers are 8-byte addresses, you don't need to count on block sparsity; in fact, you don't even need the actual blocks.

A pointer type that implements self-reads and self-writes, with no-op allocations and deletes, is easy to implement incredibly efficiently in any decent type system. A true zero-cost abstraction, if I have ever seen one!
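
Here is a minimal Rust sketch of that zero-cost "self-storage" pointer, assuming nothing beyond the standard library (all names are made up):

    // An 8-byte "pointer" whose value *is* the 8-byte block it points
    // to, so reads and writes never touch actual memory.
    #[derive(Clone, Copy)]
    struct SelfPtr(u64);

    impl SelfPtr {
        // "Allocating" a block just wraps the 8 bytes in the pointer.
        fn alloc(block: u64) -> Self {
            SelfPtr(block)
        }

        // A "read" through the pointer returns the pointer's own bits.
        fn read(self) -> u64 {
            self.0
        }

        // A "write" replaces the pointer; there is nothing to free.
        fn write(&mut self, block: u64) {
            self.0 = block;
        }
    }

    fn main() {
        let mut p = SelfPtr::alloc(0xDEAD_BEEF_CAFE_F00D);
        assert_eq!(p.read(), 0xDEAD_BEEF_CAFE_F00D);
        p.write(42);
        assert_eq!(p.read(), 42);
    }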

(On a more serious note: a memory heap and CPU that cooperated to interpret pointers with the top bit set as a 63-bit linear-access/write self-storage "pointer" is an interesting thought.)
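
A rough sketch of what that tagging scheme could look like in Rust (a hypothetical layout, not any real allocator's API):

    // Top bit set: the remaining 63 bits are the stored data itself.
    // Top bit clear: the word is an ordinary heap pointer.
    const TAG: u64 = 1u64 << 63;

    enum Resolved {
        Inline(u64),      // 63 bits of payload, no memory access needed
        Heap(*const u64), // a real pointer into the heap
    }

    fn resolve(word: u64) -> Resolved {
        if word & TAG != 0 {
            Resolved::Inline(word & !TAG)
        } else {
            Resolved::Heap(word as usize as *const u64)
        }
    }

    fn make_inline(value: u64) -> u64 {
        assert!(value & TAG == 0, "payload must fit in 63 bits");
        value | TAG
    }

    fn main() {
        match resolve(make_inline(42)) {
            Resolved::Inline(v) => println!("inline payload: {}", v),
            Resolved::Heap(_) => unreachable!(),
        }
    }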
