In distributed systems, there’s a common understanding that
it is not possible to guarantee exactly-once delivery of
messages.
This is not only a common understanding, it is a provable result. For a detailed discussion of the concepts involved, see the Two Generals' Problem[0]. Guaranteeing exactly-once processing requires a Single Point of Truth (SPoT) enforcing uniqueness, shared by all consumers, such as a transactional persistent store. Independently derived or generated "idempotency keys" cannot provide the same guarantee.
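As a minimal sketch of what that SPoT looks like in practice (sqlite3 standing in for any transactional store; the table and function names here are hypothetical), the key claim and the side effect commit in one transaction, so a re-delivered message hits the uniqueness constraint and is skipped:

```python
import sqlite3

db = sqlite3.connect("consumer_state.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS processed_messages (
        idempotency_key TEXT PRIMARY KEY   -- uniqueness enforced by the SPoT
    )
""")

def apply_side_effect(payload: str) -> None:
    # Placeholder for the actual work; in a real system this must also be
    # transactional (or itself idempotent) for the guarantee to hold.
    print("handled:", payload)

def process_once(idempotency_key: str, payload: str) -> bool:
    """Process `payload` at most once per key; return False for a duplicate."""
    try:
        with db:  # one transaction: claim the key and apply the effect together
            db.execute(
                "INSERT INTO processed_messages (idempotency_key) VALUES (?)",
                (idempotency_key,),
            )
            apply_side_effect(payload)
        return True
    except sqlite3.IntegrityError:
        return False  # key already claimed: this delivery is a duplicate
```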
The author goes on to discuss using the PostgreSQL transaction log to create "idempotency keys", which is a specialization of the aforementioned SPoT approach. A more performant variation is the "hi/lo" algorithm[1], which reduces SPoT allocation of a unique "hi" value to once per 2,147,483,648 generated keys when both the "hi" and "lo" parts are 32-bit signed integers restricted to non-negative values.
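A rough sketch of a hi/lo allocator under those assumptions (here `fetch_next_hi_from_spot` is a placeholder for the single round-trip to the shared transactional store):

```python
import itertools

LO_RANGE = 2_147_483_648  # 2**31 "lo" values per allocated "hi" value

_hi_counter = itertools.count(1)  # local stand-in for the SPoT's atomic counter

def fetch_next_hi_from_spot() -> int:
    """Placeholder: in reality, an atomic increment in the shared store."""
    return next(_hi_counter)

class HiLoGenerator:
    def __init__(self) -> None:
        self.hi = fetch_next_hi_from_spot()
        self.lo = 0

    def next_key(self) -> int:
        # Only 1 in LO_RANGE calls touches the SPoT; the rest are purely local.
        if self.lo >= LO_RANGE:
            self.hi = fetch_next_hi_from_spot()
            self.lo = 0
        key = self.hi * LO_RANGE + self.lo
        self.lo += 1
        return key

gen = HiLoGenerator()
print([gen.next_key() for _ in range(3)])
```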
Still and all, none of the above establishes logical message uniqueness. That is a trait of the problem domain: whether two or more messages with the same content are considered distinct (mandating different "idempotency keys") or duplicates (mandating identical "idempotency keys") depends on the domain itself.
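To illustrate the two policies with a small hypothetical sketch: a content-derived key makes identical payloads collapse into a single processing, while a per-delivery key makes every message distinct.

```python
import hashlib
import uuid

def key_for_duplicate_semantics(content: bytes) -> str:
    # Same content -> same key -> the store treats re-sends as duplicates.
    return hashlib.sha256(content).hexdigest()

def key_for_distinct_semantics(_content: bytes) -> str:
    # Same content -> different key -> every message is processed.
    return str(uuid.uuid4())

msg = b'{"amount": 100, "account": "42"}'
print(key_for_duplicate_semantics(msg) == key_for_duplicate_semantics(msg))  # True
print(key_for_distinct_semantics(msg) == key_for_distinct_semantics(msg))    # False
```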
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
Isn't that the situation inside a CPU across its multiple cores? Data is replicated (into caches) in what is effectively a distributed system: each core interacts with its own L2 cache, and data has to be written back to main memory for consistency. It works amazingly well.
Another, even more complex system: a multi-CPU motherboard with NUMA, where two CPUs coordinate their cores to transfer data from each other's RAM. I have one of these "distributed systems" at home, and it works amazingly well.