zlacker

[return to "Choose Postgres queue technology"]
1. Ozzie_+5q 2023-09-25 00:29:30
>>bo0tzz+(OP)
One thing I love about Kafka is that it's just an append-only log, and a client is essentially just holding an offset. This is conceptually very simple to reason about. It's also persistent and pretty fault-tolerant (you can always go back and read from any offset).

Unfortunately, Kafka carries enough complexity (due to its distributed nature) that it ends up not being worth it for most use cases.

Personally, I'd love something similar that's easier to operate. A single node could probably handle hundreds (if not thousands) of events per second, and without the distributed complexity it would be really nice to run.

And yes, in theory you could still use postgres for this (and just never delete rows). And maybe that's the answer.
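A minimal sketch of that "never delete rows" idea, assuming psycopg2 and a made-up event_log table (the DSN, table, and column names are all illustrative, not anything from the article):

    import psycopg2
    from psycopg2.extras import Json

    # Append-only "log": rows are only ever inserted, never deleted,
    # and the bigserial id doubles as the Kafka-style offset.
    conn = psycopg2.connect("dbname=events_demo")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS event_log (
                id      bigserial   PRIMARY KEY,
                topic   text        NOT NULL,
                payload jsonb       NOT NULL,
                created timestamptz NOT NULL DEFAULT now()
            )
        """)
        # Producers just append.
        cur.execute(
            "INSERT INTO event_log (topic, payload) VALUES (%s, %s)",
            ("orders", Json({"order_id": 42})),
        )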

2. valzam+iq 2023-09-25 00:32:41
>>Ozzie_+5q
Considering that you have a native "offset" (an auto-incrementing id) and the ability to partition by date, I would say Postgres is a great candidate for a simple Kafka replacement. It will also be significantly simpler to set up consumers if you don't really need the whole consumer group, partition, etc. functionality.
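A rough sketch of what a consumer could look like under the same assumptions (it reuses the illustrative event_log table from the sketch above; all names are made up):

    import psycopg2

    conn = psycopg2.connect("dbname=events_demo")  # hypothetical DSN

    with conn, conn.cursor() as cur:
        # Each consumer just remembers the last id it has processed --
        # the auto-incrementing id plays the role of the Kafka offset.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS consumer_offsets (
                consumer text   PRIMARY KEY,
                last_id  bigint NOT NULL DEFAULT 0
            )
        """)
        cur.execute("""
            INSERT INTO consumer_offsets (consumer) VALUES ('billing')
            ON CONFLICT (consumer) DO NOTHING
        """)

    def poll(consumer, batch_size=100):
        """Fetch the next batch past this consumer's offset, then advance it."""
        with conn, conn.cursor() as cur:
            cur.execute(
                "SELECT last_id FROM consumer_offsets WHERE consumer = %s",
                (consumer,),
            )
            last_id = cur.fetchone()[0]
            cur.execute(
                "SELECT id, topic, payload FROM event_log"
                " WHERE id > %s ORDER BY id LIMIT %s",
                (last_id, batch_size),
            )
            rows = cur.fetchall()
            if rows:
                # Advancing (or rewinding) last_id is the equivalent of
                # seeking to an offset in Kafka.
                cur.execute(
                    "UPDATE consumer_offsets SET last_id = %s WHERE consumer = %s",
                    (rows[-1][0], consumer),
                )
            return rows

One caveat: with concurrent writers, sequence values can become visible out of commit order, so a strictly gap-free read needs a bit more care than this sketch shows.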