zlacker

[return to "Postgres is a great pub/sub and job server (2019)"]
1. colinc+be[view] [source] 2021-12-18 00:19:11
>>anonu+(OP)
Author here! A few updates since this was published two years ago:

- The service mentioned (now called https://webapp.io ) eventually made it into YC (S20) and still uses Postgres as its pub/sub implementation, doing hundreds of thousands of messages per day. The Postgres instance now runs on 32 cores and 128 GB of memory and has scaled well.

- We bolstered Postgres's NOTIFY with Redis pub/sub for high-traffic code paths, but it's been nice having ACID guarantees as the default for less popular paths (e.g., webhook handling)
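To illustrate where that ACID default comes from: Postgres queues a NOTIFY issued inside a transaction and delivers it only on COMMIT, so listeners never see an event whose row was rolled back. A minimal sketch (the channel and table names are made up, not webapp.io's actual schema):

```python
# Hypothetical illustration -- channel/table names are invented.
# The NOTIFY is queued at the point it appears, but delivered to
# listeners only if the COMMIT succeeds; a ROLLBACK discards it
# along with the inserted row.
RECORD_WEBHOOK_TXN = """
BEGIN;
INSERT INTO webhook_events (source, body) VALUES ('stripe', 'payload');
NOTIFY webhook_events;  -- delivered atomically with the row, on COMMIT
COMMIT;
"""
```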

- This pattern only ever caused one operational incident: a transaction held a lock, which caused the notification queue to start growing and eventually (silently) stop sending messages. Starting Postgres with statement_timeout=(a few days) was enough to solve it.
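For reference, that mitigation is a one-line server setting. The comment only says "a few days", so the value below is purely illustrative:

```ini
# postgresql.conf -- sketch of the mitigation described above.
# The actual value was "(a few days)"; 2d is an illustrative guess, not it.
# Any statement still running after this long (including one holding a
# long-lived lock) is aborted, which unblocks the notification queue.
statement_timeout = '2d'
```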

Previous discussion: https://news.ycombinator.com/item?id=21484215

Happy to answer any questions!

2. boomsk+Ye[view] [source] 2021-12-18 00:24:46
>>colinc+be
> doing hundreds of thousands of messages per day

> The postgres instance now runs on 32 cores and 128gb of memory and has scaled well.

Am I the only one who finds those two statements hard to square?

3. colinc+Uf[view] [source] 2021-12-18 00:31:49
>>boomsk+Ye
Such a server is $400/mo; a backend developer who can confidently maintain Kafka in production is significantly more expensive!
4. macksd+hg[view] [source] 2021-12-18 00:34:27
>>colinc+Uf
I think the point of interest was 32 cores to handle what sounds like 10 messages per second at most. That's not really a ton of throughput... It's certainly a valid point that an awful lot of use cases don't need Twitter-scale firehoses or Google-sized Hadoop clusters.
5. colinc+wg[view] [source] 2021-12-18 00:36:23
>>macksd+hg
Ah, the database does a lot more than just pub/sub - especially since the high traffic pub/sub goes through redis. I guess my point was that we never regretted setting up postgres as the "default job queue" and it never required much engineering work to maintain.

For example, it handles Stripe webhooks when users change their pricing tier - if you drop that message, users would be paying for something they wouldn't receive.
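A "default job queue" like the one described above is commonly built as a jobs table polled with SELECT ... FOR UPDATE SKIP LOCKED. A minimal sketch, assuming a psycopg2-style connection object and an invented schema (none of this is webapp.io's actual code):

```python
# Hypothetical sketch of a Postgres-backed job queue; table and column
# names are illustrative, not webapp.io's actual schema.
DEQUEUE_SQL = """
SELECT id, payload
FROM jobs
WHERE done = false
ORDER BY id
FOR UPDATE SKIP LOCKED  -- concurrent workers skip rows already claimed
LIMIT 1;
"""

def handle_one_job(conn, handler):
    """Claim one job, run the handler, and mark the job done in the SAME
    transaction: if handler() raises, the rollback releases the row, so a
    Stripe webhook is retried later instead of being silently dropped."""
    with conn:                      # psycopg2-style: commit on success, rollback on error
        with conn.cursor() as cur:
            cur.execute(DEQUEUE_SQL)
            row = cur.fetchone()
            if row is None:
                return False        # queue is empty
            job_id, payload = row
            handler(payload)        # e.g. apply the pricing-tier change
            cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
    return True
```

Because the claim, the side effect, and the done-flag update share one transaction, a crashed worker's job simply becomes visible to the next poller - the ACID guarantee the thread is about.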
