I disagree strongly - I think X-per-second should be decimal, to correspond to hertz. But for quantities, binary seems better. (Modern CS papers tend to use MiB, GiB, etc. as abbreviations for the binary units.)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
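That 7.37% falls straight out of the GiB/GB ratio; a quick Python sanity check:

```python
# Free over-provisioning from selling binary-sized flash in decimal units:
# a binary GB is 2**30 bytes, a decimal GB is only 10**9 bytes.
raw_flash = 2**30      # bytes of raw flash per advertised "GB"
advertised = 10**9     # bytes the customer is promised
spare = raw_flash / advertised - 1
print(f"{spare:.2%}")  # → 7.37%
```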
(The old excuse was that networks are serial but they haven't been serial for decades.)
The same confusion, and even more, is engendered when talking about "fifths" etc.
RAM had binary sizing for perfectly practical reasons. Nothing else did (until SSDs inherited RAM's architecture).
We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.
Interestingly, starting from 10 Gbit/s we now also have binary divisions: 5 Gbit/s and 2.5 Gbit/s.
Even at slower speeds, these were traditionally always decimal-based - we call it 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps, and then we had the odd one out - 56k (actually 57600bps), where the k means 1024 (approximately) - the first and last common speed to use base-2 kilo. Once you get into Mbps it's back to decimal.
Only later, some marketing assholes thought they could better sell their hard drives by lying about the size, weaseling out of legal issues by redefining the units.
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000-byte pages.
Now Ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is in octets! Starting with the preamble. But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined for CSMA/CD to work, but the max could have been anything.
56000 BPS was the bitrate you could get out of a DS0 channel, which is the digital version of a normal phone line. A DS0 is actually 64000 BPS, but 1 bit out of 8 is "robbed" for overhead/signalling. An analog phone line got sampled at 56000 BPS, but lines were very noisy, which was fine for voice, but not data.
7 bits per sample * 8000 samples per second = 56000, not 57600. That was theoretical maximum bandwidth! The FCC also capped modems at 53K or something, so you couldn't even get 56000, not even on a good day.
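Spelled out as a trivial sketch (the numbers are the ones from the comments above):

```python
# A DS0 carries 8000 samples/s at 8 bits each = 64000 bps raw,
# but robbed-bit signalling leaves only 7 usable bits per sample.
samples_per_second = 8000
bits_per_sample = 8
usable_bits = bits_per_sample - 1   # one bit "robbed" for signalling
raw = samples_per_second * bits_per_sample
usable = samples_per_second * usable_bits
print(raw, usable)  # → 64000 56000
```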
There are probably cases where corresponding to Hz is useful, but for most users I think 119 MiB/s is more useful than 1 Gbit/s.
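The conversion behind that 119 figure, as a quick sketch (link speeds are decimal bits, file-transfer readouts are usually binary bytes):

```python
link = 1e9                          # 1 Gbit/s - network rates are decimal
bytes_per_second = link / 8         # 125,000,000 bytes/s
mib_per_second = bytes_per_second / 2**20
print(f"{mib_per_second:.1f} MiB/s")  # → 119.2 MiB/s
```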
The decimal-vs-binary discrepancy is used more as slack space to cope with the inconvenience of having to erase whole 16MB blocks at a time while allowing the host to send write commands as small as 512 bytes. Given the limited number of program/erase cycles that any flash memory cell can withstand, and the enormous performance penalty that would result from doing 16MB read-modify-write cycles for any smaller host writes, you need way more spare area than just a small multiple of the erase block size. A small portion of the spare area is also necessary to store the logical to physical address mappings, typically on the order of 1GB per 1TB when tracking allocations at 4kB granularity.
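The mapping-table estimate checks out; a rough sketch of the arithmetic (the 4-byte entry size, i.e. a 32-bit physical address per entry, is my assumption):

```python
capacity = 10**12          # 1 TB drive (decimal)
granularity = 4096         # 4 kB logical-to-physical mapping granularity
entry_bytes = 4            # assumption: 32-bit physical address per entry
entries = capacity // granularity
table_bytes = entries * entry_bytes
print(f"{table_bytes / 10**9:.2f} GB")  # → 0.98 GB, i.e. ~1 GB per 1 TB
```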
- magnetic media
- optical media
- radio waves
- time
There are good reasons for having power-of-2 sectors (they need to get loaded into RAM), but there's really no compelling reason to have a power-of-2 number of sectors. If you can fit 397 sectors, only putting in 256 is wasteful.
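Just how wasteful, for that example:

```python
fit = 397                  # sectors that physically fit
used = 256                 # largest power of two that fits
wasted = 1 - used / fit
print(f"{wasted:.0%}")     # → 36% of the capacity thrown away
```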
The choice would be effectively arbitrary; the number of actual bits or bytes is the same regardless of the multiplier you use. But since it's for a computer, it makes sense to use units that are comparable (e.g. RAM and HD).