1. macksd+(OP) 2021-12-18 00:34:27
I think the point of interest was 32 cores to handle what sounds like at most 10 messages per second. That's not really a ton of throughput... It's certainly a valid point that an awful lot of use cases don't need Twitter-scale firehoses or Google-sized Hadoop clusters.
2. colinc+f 2021-12-18 00:36:23
>>macksd+(OP)
Ah, the database does a lot more than just pub/sub - especially since the high-traffic pub/sub goes through Redis. I guess my point was that we never regretted setting up Postgres as the "default job queue", and it never required much engineering work to maintain.
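
(To make the "default job queue" idea concrete, here is a minimal sketch of one common way to do it in Postgres: a jobs table polled with FOR UPDATE SKIP LOCKED via psycopg2. The table, columns, and worker function are illustrative assumptions, not details from this thread.)

    import psycopg2

    # Illustrative schema: a single jobs table, claimed with SKIP LOCKED so
    # several workers can poll without stepping on each other:
    #
    #   CREATE TABLE jobs (
    #       id         bigserial PRIMARY KEY,
    #       kind       text NOT NULL,
    #       payload    jsonb NOT NULL,
    #       created_at timestamptz NOT NULL DEFAULT now(),
    #       done_at    timestamptz
    #   );

    def claim_and_run_one(conn, handle):
        """Claim the oldest unfinished job, run it, and mark it done in one transaction."""
        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute(
                    """
                    SELECT id, kind, payload
                    FROM jobs
                    WHERE done_at IS NULL
                    ORDER BY created_at
                    FOR UPDATE SKIP LOCKED
                    LIMIT 1
                    """
                )
                row = cur.fetchone()
                if row is None:
                    return False        # queue is empty right now
                job_id, kind, payload = row
                handle(kind, payload)   # if this raises, the claim is rolled back
                cur.execute("UPDATE jobs SET done_at = now() WHERE id = %s", (job_id,))
        return True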

For example, it handles Stripe webhooks when users change their pricing tier - if you drop that message, users could be paying for something they aren't receiving.
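
(A similar sketch of the webhook side: write the event into that same illustrative jobs table before acknowledging, so a failed insert makes Stripe retry the delivery instead of the event being lost. Flask, the route, and the environment variable names are assumptions for illustration, not the commenter's actual setup.)

    import json
    import os

    import psycopg2
    import stripe
    from flask import Flask, request

    app = Flask(__name__)
    conn = psycopg2.connect(os.environ["DATABASE_URL"])      # assumed env var
    WEBHOOK_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]     # assumed env var

    @app.route("/stripe/webhook", methods=["POST"])          # made-up route
    def stripe_webhook():
        # Verify the signature so only events Stripe actually sent get enqueued.
        try:
            event = stripe.Webhook.construct_event(
                request.get_data(), request.headers["Stripe-Signature"], WEBHOOK_SECRET
            )
        except Exception:        # missing header, bad payload, or bad signature
            return "", 400

        # Persist the event *before* returning 200. If the insert fails, Stripe
        # gets a non-2xx response and retries later, so the message isn't dropped.
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO jobs (kind, payload) VALUES (%s, %s)",
                    (event["type"], json.dumps(event["data"])),
                )
        return "", 200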

3. rowanG+fW6 2021-12-20 16:38:48
>>macksd+(OP)
That says nothing about the distribution of traffic. It might well be thousands and thousands of pub/sub messages at some points of the day and zero at others.