zlacker

[parent] [thread] 35 comments
1. crmd+(OP)[view] [source] 2025-05-22 00:23:09
Reminds me of when I started working on storage systems as a young man: I once suggested pre-computing every possible 4KB block and just using pointers to the correct block as data is written, until someone pointed out that the number of unique 4KB blocks (2^32768) far exceeds the number of atoms in the universe.
replies(9): >>ww520+44 >>makman+x5 >>manwe1+G9 >>jodrel+U9 >>nine_k+Zi >>whatev+mk >>Traube+5r >>benchl+U91 >>Neverm+oJ1
2. ww520+44[view] [source] 2025-05-22 01:13:20
>>crmd+(OP)
The idea is not too far off. You could compute a hash on an existing data block and store the hash-to-data-block mapping. Now you can use the hash anywhere that data block resides, i.e. any duplicate data blocks can use the same hash. That's how storage deduplication works, in a nutshell.
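Roughly like this, as a toy Python sketch of a content-addressed block store (the hash choice and the verify step are just illustrative):

    import hashlib

    BLOCK_SIZE = 4096
    store = {}  # digest -> the single stored copy of a block

    def write_block(block: bytes) -> bytes:
        """Return a digest 'pointer'; store the block only if its content is new."""
        digest = hashlib.sha256(block).digest()
        existing = store.get(digest)
        if existing is None:
            store[digest] = block                 # first time we've seen this content
        elif existing != block:
            raise ValueError("hash collision")    # astronomically unlikely with SHA-256
        return digest

    def read_block(digest: bytes) -> bytes:
        return store[digest]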
replies(1): >>valent+p4
3. valent+p4[view] [source] [discussion] 2025-05-22 01:18:45
>>ww520+44
Except that there are collisions...
replies(2): >>datame+C5 >>ww520+4y1
4. makman+x5[view] [source] 2025-05-22 01:30:53
>>crmd+(OP)
In some contexts, dictionary encoding (which is what you're suggesting, approximately) can actually work great. For example, common values, or null values (which are a common type of common value). It's just less efficient to try to do it with /every/ block. You have to make it "worth it", which depends on how frequently the value occurs. Shorter values give you a worse compression ratio on one hand, but on the other hand they're often likelier to appear in the data, which makes up for it, to a point.

There are other, similar lightweight encoding schemes, like RLE, delta, and frame-of-reference encoding, which are each good for different data distributions.
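To give a feel for how lightweight these are, a toy Python sketch of three of them (purely illustrative; a real column store packs the codes into bits):

    def rle_encode(values):
        """Run-length encoding: [7, 7, 7, 0, 0] -> [(7, 3), (0, 2)]."""
        runs = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1] = (v, runs[-1][1] + 1)
            else:
                runs.append((v, 1))
        return runs

    def dict_encode(values):
        """Dictionary encoding: store each distinct value once, plus small codes."""
        dictionary = sorted(set(values))
        code = {v: i for i, v in enumerate(dictionary)}
        return dictionary, [code[v] for v in values]

    def delta_encode(values):
        """Delta encoding: store the first value, then successive differences."""
        if not values:
            return []
        return [values[0]] + [b - a for a, b in zip(values, values[1:])]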

5. datame+C5[view] [source] [discussion] 2025-05-22 01:32:20
>>valent+p4
This might be completely naive but can a reversible time component be incorporated into distinguishing two hash calculations? Meaning when unpacked/extrapolated it is a unique signifier but when decomposed it folds back into the standard calculation - is this feasible?
replies(2): >>ruined+Wo >>shakna+qp
6. manwe1+G9[view] [source] 2025-05-22 02:21:41
>>crmd+(OP)
It seems like you weren’t really that far off from implementing it, you just need a 4 KB pointer to point to the right block. And in fact, that is what all storage systems do!
7. jodrel+U9[view] [source] 2025-05-22 02:22:43
>>crmd+(OP)
Reminds me of when I imagined brute-forcing every possible small picture as simply 256 shades of gray for each pixel x (640 x 480 = 307200 pixels) = 78 million possible pictures.

Actually I don't have any intuition for why that's wrong, except that if we catenate the rows into one long row then the picture can be considered as a number 307200 digits long in base 256, and then I see that it could represent 256^307200 possible different values. Which is a lot: https://www.wolframalpha.com/input?i=256%5E307200
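For scale, a quick Python check of how many decimal digits that number has:

    >>> import math
    >>> math.floor(307200 * math.log10(256)) + 1   # digits in 256**307200
    739812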

replies(4): >>p1neco+Tc >>deadfo+Br >>plasti+pJ >>danwil+QJ
8. p1neco+Tc[view] [source] [discussion] 2025-05-22 03:01:10
>>jodrel+U9
78 million is how many pixels would be in 256 different pictures with 307200 pixels each. You're only counting each pixel once for each possible value, but you actually need to count each possible value of each pixel once per possible combination of all of the other pixels.

The number of possible pictures is indeed 256^307200, which is an unfathomably larger number than 78 million. (256 possible values for the first pixel * 256 possible values for the second pixel * 256 possi...).

9. nine_k+Zi[view] [source] 2025-05-22 04:19:02
>>crmd+(OP)
If some blocks are highly repetitive, this may make sense.

It's basically how deduplication works in ZFS. And that's why it only makes sense when you store a lot of repetitive data, e.g. VM images.

10. whatev+mk[view] [source] 2025-05-22 04:36:21
>>crmd+(OP)
We know for a fact that when we disable a processor's cache, its performance plummets, so the question is: how much of computation is brand-new computation (never seen before)?
replies(1): >>vlovic+bq
11. ruined+Wo[view] [source] [discussion] 2025-05-22 05:47:36
>>datame+C5
hashes by definition are not reversible. you could store a timestamp together with a hash, and/or you could include a timestamp in the digested content, but the timestamp can’t be part of the hash.
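Something like this, as a rough Python sketch of the two options (the field names are made up):

    import hashlib, time

    block = b"...a 4KB data block..."
    ts = int(time.time())

    # Option 1: keep the timestamp alongside the hash as separate metadata.
    record = {"digest": hashlib.sha256(block).hexdigest(), "timestamp": ts}

    # Option 2: mix the timestamp into the digested content. The digest now
    # depends on the timestamp, but neither the timestamp nor the block can
    # be recovered from the digest itself.
    digest_with_ts = hashlib.sha256(ts.to_bytes(8, "big") + block).hexdigest()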
replies(2): >>RetroT+z91 >>datame+8I6
12. shakna+qp[view] [source] [discussion] 2025-05-22 05:54:16
>>datame+C5
Some hashes do have verification bits, which are used not just to verify that a hash is intact, but to distinguish one "identical" hash from another. However, they do tend to be slower hashes.
replies(1): >>grumbe+br
13. vlovic+bq[view] [source] [discussion] 2025-05-22 06:04:20
>>whatev+mk
While true, a small technical nitpick is that the cache also contains data that's previously been loaded and reused, not just the results of previous computation (e.g. your executable program itself, or a file being processed).
14. Traube+5r[view] [source] 2025-05-22 06:14:28
>>crmd+(OP)
you might be interested in pifs

https://github.com/philipl/pifs

15. grumbe+br[view] [source] [discussion] 2025-05-22 06:15:24
>>shakna+qp
Do you have an example? That just sounds like a hash that is a few bits longer.
replies(1): >>shakna+ds
16. deadfo+Br[view] [source] [discussion] 2025-05-22 06:20:29
>>jodrel+U9
I think at some point you should have realized that there are obviously more than 78 million possible greyscale 640x480 pictures. There are a lot of intuitive examples, but just think of this:

https://images.lsnglobal.com/ZFSJiK61WTql9okXV1N5XyGtCEc=/fi...

If there were only 78 million possible pictures, how could that portrait be so recognizably one specific person? Wouldn't that mean that your entire picture space wouldn't even be able to fit a single portrait of everyone in Germany?

replies(1): >>jodrel+d31
17. shakna+ds[view] [source] [discussion] 2025-05-22 06:27:14
>>grumbe+br
Mostly through use of GCM (Galois/Counter Mode). Usually you tag the key, but you can also tag the value to check for collisions instead.

But as I said, slow.

18. plasti+pJ[view] [source] [discussion] 2025-05-22 09:34:30
>>jodrel+U9
I had a friend who had the same idea, but for pixel fonts with only two colors and a 16x16 canvas. It was still 2^256. Watching that thing run and trying to estimate when it would finish made me understand encryption.
19. danwil+QJ[view] [source] [discussion] 2025-05-22 09:40:29
>>jodrel+U9
Yeah, I had a similar thought back in the 90s and made a program to iterate through all possible images at a fairly low res. I left it running while I was at school and got home after many hours to find it had hardly got past the first row of pixels! This was a huge eye-opener about how big a possibility space digital images really exist in!
replies(1): >>mystif+KD1
20. jodrel+d31[view] [source] [discussion] 2025-05-22 12:57:58
>>deadfo+Br
"At some point" I do realise it. What I don't have is an intuitive feel for why a number can be three digits 000 to 999 and each place has ten choices, but it's not 10 x 3 possibles. I tried to ask ChatGPT to give me an intuition for it, but all it does is go into an explanation of combinations. I know it's 10 x 10 x 10 meaning 10^3 I don't need that explanation again, what I'm looking for is an intuition for why it isn't 10x3.

> "if there were only 78 million possible pictures, how could that portrait be so recongizably one specific person? wouldnt that mean that your entire picture space wouldnt even be able to fit a single portrait of everyone in Germany?"

It's not intuitive that "a 640x480 computer picture must be able to fit a single portrait of everyone in Germany"; A human couldn't check it, a human couldn't remember 78 million distinct pictures, look through them, and see that they all look sufficiently distinct and at no point is it representing 50k people with one picture; human attention and memory isn't enough for that.

replies(1): >>Thomas+qi1
21. RetroT+z91[view] [source] [discussion] 2025-05-22 13:44:16
>>ruined+Wo
> hashes by definition are not reversible.

Sure they are. You could generate every possible input, compute hash & compare with a given one.

Ok, it might take an infinite amount of compute (time/energy). But that's just a technicality, right?
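A sketch of that "technicality" in Python; it only works because the candidate space here is tiny (four lowercase letters), which is exactly the point:

    import hashlib
    from itertools import product
    from string import ascii_lowercase

    target = hashlib.sha256(b"cats").hexdigest()

    # "Reverse" the hash by trying every candidate input and comparing digests.
    # 26**4 candidates is trivial; 2**32768 possible 4KB blocks is not.
    for candidate in product(ascii_lowercase, repeat=4):
        guess = "".join(candidate).encode()
        if hashlib.sha256(guess).hexdigest() == target:
            print("found a preimage:", guess)
            break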

replies(1): >>dagw+2a1
22. benchl+U91[view] [source] 2025-05-22 13:46:27
>>crmd+(OP)
The other problem is that (if we take literally the absurd proposal of computing "every possible block" up front) you're not actually saving any space by doing this, since your "pointers" would be the same size as the blocks they point to.
replies(1): >>lesuor+3E1
23. dagw+2a1[view] [source] [discussion] 2025-05-22 13:47:43
>>RetroT+z91
> Sure they are. You could generate every possible input

Depends entirely on what you mean by reversible. For every hash value, there are an infinite number of inputs that give that value. So while it is certainly possible to find some input that hashes to a given value, you cannot know which input I originally hashed to get that value.

24. Thomas+qi1[view] [source] [discussion] 2025-05-22 14:39:42
>>jodrel+d31
Try starting with a 2x2 image, then 3x3, etc., and manually list all the possibilities.
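For example, for a 2x2 image with only 2 shades, listing them makes the "multiply, don't add" part visible: it's 2*2*2*2 = 16 pictures, not 2*4 = 8. A quick Python sketch:

    from itertools import product

    SHADES = 2   # keep it tiny; 256 shades on a 640x480 image gives 256**307200
    PIXELS = 4   # a 2x2 image, flattened

    pictures = list(product(range(SHADES), repeat=PIXELS))
    print(len(pictures))   # 16 == 2**4, not 2*4
    for pic in pictures:
        print(pic)         # every distinct picture, e.g. (0, 1, 1, 0)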
replies(1): >>jodrel+ZT2
25. ww520+4y1[view] [source] [discussion] 2025-05-22 16:16:24
>>valent+p4
Can use cryptographic hashing.
replies(1): >>anonym+7H1
26. mystif+KD1[view] [source] [discussion] 2025-05-22 16:47:01
>>danwil+QJ
I had the same idea when I first learned about programming as a teenager. I wonder how many young programmers have had this exact same train of thought?
27. lesuor+3E1[view] [source] [discussion] 2025-05-22 16:48:19
>>benchl+U91
If you don't actually do every single block, then you have Huffman coding [1].

I imagine if you have a good idea of the incoming data, you could probably do a similar encoding scheme where you use 7 bits to point to a ~512-bit blob and the 8th bit means the next 512 bits couldn't be compressed (rough sketch below).

[1]: https://en.wikipedia.org/wiki/Huffman_coding
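Something like this toy Python version of that scheme (blob size and tag layout as described above; a real codec would need framing for partial blobs, a way to build the dictionary, etc.):

    BLOB_SIZE = 64  # 512 bits

    def encode(data: bytes, dictionary: list) -> bytes:
        """Replace known 64-byte blobs with a 1-byte index; escape unknown ones."""
        assert len(dictionary) <= 128            # indices must fit in 7 bits
        index = {blob: i for i, blob in enumerate(dictionary)}
        out = bytearray()
        for off in range(0, len(data), BLOB_SIZE):
            blob = data[off:off + BLOB_SIZE]
            if blob in index:
                out.append(index[blob])          # high bit clear: dictionary hit
            else:
                out.append(0x80)                 # high bit set: literal blob follows
                out.extend(blob)
        return bytes(out)

    def decode(encoded: bytes, dictionary: list) -> bytes:
        out = bytearray()
        i = 0
        while i < len(encoded):
            tag = encoded[i]
            i += 1
            if tag & 0x80:                       # literal blob
                out.extend(encoded[i:i + BLOB_SIZE])
                i += BLOB_SIZE
            else:                                # dictionary reference
                out.extend(dictionary[tag])
        return bytes(out)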

28. anonym+7H1[view] [source] [discussion] 2025-05-22 17:02:11
>>ww520+4y1
How does that get around the pigeonhole principle?

I think you'd have to compare the data value before purging, and you can only do the deduplication (purge) if the block is actually the same; otherwise you have to keep the block (you can't replace it with the hash, because the hash link in the pool points to different data).

replies(1): >>ww520+4V2
29. Neverm+oJ1[view] [source] 2025-05-22 17:15:12
>>crmd+(OP)
The other problem is that to address all possible 4096-byte blocks, you need a 4096-byte address. I suppose we would expect the actual number of blocks computed and reused to be a sparse subset.

Alternately, have you considered 8-byte blocks?

If your block pointers are 8-byte addresses, you don't need to count on block sparsity; in fact, you don't even need to have the actual blocks.

A pointer type that implements self-reads and writes, with null allocations and deletes, is easy to implement incredibly efficiently in any decent type system. A true zero-cost abstraction, if I have ever seen one!

(On a more serious note, a memory heap and CPU that cooperated to interpret pointers with the top bit set as a 63-bit linear-access/write self-storage "pointer" is an interesting thought.)
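A toy model of that top-bit trick in Python, pretending an int is a 64-bit pointer word (all names made up; the heap lookup is left as a stub):

    TAG = 1 << 63                      # top bit set: the "pointer" stores its own data

    def make_inline(value: int) -> int:
        assert 0 <= value < (1 << 63)  # up to 63 bits of payload fit inline
        return TAG | value

    def deref(ptr: int) -> int:
        if ptr & TAG:                  # self-storage pointer: no memory access at all
            return ptr & ~TAG
        return heap_read(ptr)          # ordinary pointer: fall back to the heap

    def heap_read(addr: int) -> int:
        raise NotImplementedError("a real heap lookup would go here")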

30. jodrel+ZT2[view] [source] [discussion] 2025-05-23 00:51:32
>>Thomas+qi1
That's focusing on the wrong thing; as I said, "I know it's 10 x 10 x 10 meaning 10^3 I don't need that explanation [for the correct combinations], what I'm looking for is an intuition for why it isn't 10x3".
replies(1): >>cwmoor+K09
31. ww520+4V2[view] [source] [discussion] 2025-05-23 01:00:15
>>anonym+7H1
The hash collision chance is extremely low.
replies(1): >>valent+3X2
32. valent+3X2[view] [source] [discussion] 2025-05-23 01:21:47
>>ww520+4V2
For small amounts of data, yeah. With growing data, the chance of a collision grows more than proportionally. So in the context of working on storage systems (like S3 or so) that won't work unless customers actually accept the risk of a collision as okay. So for example, when storing media data (movies, photos), I could imagine that, but not for data in general.
replies(1): >>ww520+t13
33. ww520+t13[view] [source] [discussion] 2025-05-23 02:07:27
>>valent+3X2
The chance of a cryptographic hash collision is very, very small, like many-times-the-lifetime-of-the-universe small. It's smaller than the chance of AWS burning down and all backups being lost, leading to data loss.
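For a sense of scale, the standard birthday bound in Python (the block count is a made-up round number, just to show the shape of the math):

    # Expected accidental collisions ~= n^2 / (2 * 2^bits) for n random blocks.
    n = 500e12                            # hypothetical ~500 trillion stored blocks
    bits = 256                            # e.g. SHA-256
    print(n * n / (2 * 2**bits))          # ~1e-48 expected collisions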
replies(1): >>valent+pe3
34. valent+pe3[view] [source] [discussion] 2025-05-23 04:59:03
>>ww520+t13
You have a point.

If AWS S3 applied this technique using MD5 (128-bit), it would only get a handful of collisions. Using 256 bits would drive that down to a level where any collision is very unlikely.

This would be worth it if a 4KB block is, on average, duplicated with a chance of at least 6.25% (not considering the overhead of data structures, etc.).

35. datame+8I6[view] [source] [discussion] 2025-05-24 19:45:59
>>ruined+Wo
Oh, of course, the timestamp could instead be metadata!
36. cwmoor+K09[view] [source] [discussion] 2025-05-25 23:17:59
>>jodrel+ZT2
ChatGPT might be able to explain combinatorics if you use that key term.

I’m fond of derangements and their relationship with permutations, which involves a factor of e.
