- The service mentioned (now called https://webapp.io) eventually made it into YC (S20) and still uses Postgres as its pub/sub implementation, handling hundreds of thousands of messages per day. The Postgres instance now runs on 32 cores and 128 GB of memory and has scaled well.
- We bolstered Postgres's NOTIFY with Redis pub/sub for high-traffic code paths, but it's been nice having ACID guarantees as the default for less popular paths (e.g., webhook handling); a minimal sketch of the Postgres side follows this list.
- This pattern only ever caused one operational incident: a transaction held a lock, which caused the notification queue to start growing and eventually (silently) stop sending messages. Starting Postgres with statement_timeout set to a few days was enough to solve this (a sketch of that setting also follows the list).
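For readers unfamiliar with the pattern, here is a minimal sketch of Postgres LISTEN/NOTIFY used as pub/sub, written in Python with psycopg2. The channel name, DSN, and webhooks table are illustrative assumptions, not details from webapp.io's actual setup.

```python
# Minimal LISTEN/NOTIFY pub/sub sketch. Channel "events", the DSN, and the
# "webhooks" table are assumptions for illustration.
import select
import psycopg2
import psycopg2.extensions

DSN = "dbname=app"  # assumed connection string

def publish():
    # NOTIFY issued inside a transaction is only delivered on COMMIT, which is
    # the ACID property mentioned above: subscribers never see events for work
    # that rolled back.
    conn = psycopg2.connect(DSN)
    with conn, conn.cursor() as cur:
        cur.execute("INSERT INTO webhooks (body) VALUES (%s)", ("{}",))
        cur.execute("SELECT pg_notify('events', 'webhook_received')")
    # leaving the `with conn` block commits; the notification fires here
    conn.close()

def listen():
    conn = psycopg2.connect(DSN)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    with conn.cursor() as cur:
        cur.execute("LISTEN events;")
    while True:
        # Block until the connection's socket is readable, then drain the
        # queue of pending notifications.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timed out; loop and wait again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            print(f"got {note.payload!r} on channel {note.channel}")
```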
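And a hedged sketch of the statement_timeout mitigation from the incident above. The original comment set the value at server start; writing it into postgresql.conf directly works too, and the ALTER SYSTEM route below is one way to do it without editing the file. The '48h' value stands in for "a few days", the DSN is assumed, and ALTER SYSTEM needs superuser privileges.

```python
import psycopg2

# Assumed DSN and role; '48h' stands in for "a few days".
admin = psycopg2.connect("dbname=app user=postgres")
admin.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block

with admin.cursor() as cur:
    # Abort any statement that runs longer than 48 hours, so a stuck
    # transaction can't silently wedge the notification queue again.
    cur.execute("ALTER SYSTEM SET statement_timeout = '48h'")
    cur.execute("SELECT pg_reload_conf()")  # apply without a restart
admin.close()
```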
Previous discussion: https://news.ycombinator.com/item?id=21484215
Happy to answer any questions!
> The Postgres instance now runs on 32 cores and 128 GB of memory and has scaled well.
Am I the only one who thinks that's a lot of hardware for hundreds of thousands of messages per day? That works out to only a few messages per second.
And if your needs are simpler, as in this case, there are dozens of smaller pub/sub/queue systems you could compare this to.
Kafka does more for streaming data, but doesn't do squat for relational data. You always need a database, but you sometimes can get by without a queuing system.