zlacker

[return to "Postgres is a great pub/sub and job server (2019)"]
1. colinc+be[view] [source] 2021-12-18 00:19:11
>>anonu+(OP)
Author here! A few updates since this was published two years ago:

- The service mentioned (now called https://webapp.io) eventually made it into YC (S20) and still uses Postgres as its pub/sub implementation, doing hundreds of thousands of messages per day. The Postgres instance now runs on 32 cores and 128 GB of memory and has scaled well.

- We bolstered Postgres's NOTIFY with Redis pub/sub for high-traffic code paths, but it's been nice having ACID guarantees as the default for less popular paths (e.g., webhook handling)

- This pattern only ever caused one operational incident: a transaction held a lock, which caused the notification queue to start growing and eventually (silently) stop sending messages. Starting Postgres with statement_timeout=(a few days) was enough to solve it.
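
For anyone unfamiliar with the pattern, a minimal sketch of the LISTEN/NOTIFY consumer loop in Python with psycopg2 (the connection string, channel name, and payload here are illustrative, not our actual setup):

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=app")  # illustrative DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    with conn.cursor() as cur:
        cur.execute("LISTEN job_events;")  # illustrative channel name

    while True:
        # Block (up to 5s) until the connection's socket is readable,
        # then drain any pending notifications.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            print(f"channel={note.channel} payload={note.payload}")

Publishers just run SELECT pg_notify('job_events', '<payload>') from whatever transaction does the work; since NOTIFY is transactional, the message is only delivered if that transaction commits, which is where the ACID-by-default niceness above comes from.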

Previous discussion: https://news.ycombinator.com/item?id=21484215

Happy to answer any questions!

◧◩
2. boomsk+Ye[view] [source] 2021-12-18 00:24:46
>>colinc+be
> doing hundreds of thousands of messages per day

> The Postgres instance now runs on 32 cores and 128 GB of memory and has scaled well.

Am I the only one who thinks 32 cores and 128 GB is a lot of machine for a few hundred thousand messages a day?

◧◩◪
3. colinc+Uf[view] [source] 2021-12-18 00:31:49
>>boomsk+Ye
Such a server is $400/mo; a backend developer who can confidently maintain Kafka in production is significantly more expensive!
◧◩◪◨
4. rockwo+vk[view] [source] 2021-12-18 01:09:49
>>colinc+Uf
Fwiw I don't know the shape of the data, but I feel like you could do this with Firebase for a few bucks a month...
◧◩◪◨⬒
5. daenz+yp[view] [source] 2021-12-18 01:49:36
>>rockwo+vk
You 100% could, and this thread feels like the twilight zone with how many people are advocating for using an RDBMS for (what seems like) most people's queuing needs.
◧◩◪◨⬒⬓
6. tomc19+1F[view] [source] 2021-12-18 04:31:45
>>daenz+yp
Dude, you are seriously underestimating Postgres's versatility. It does so many different things, and does them well!
◧◩◪◨⬒⬓⬔
7. daenz+oI[view] [source] 2021-12-18 05:16:46
>>tomc19+1F
I'm not underestimating anything; I'm advocating for the right tool for the job. I have a hard time believing, despite the skewed sample size in this thread, that most people think using Postgres as a message queue makes the most sense in most cases.
◧◩◪◨⬒⬓⬔⧯
8. tomc19+YK[view] [source] 2021-12-18 05:46:29
>>daenz+oI
What is your idea of 'most cases'?

I've personally written real-time back-of-house order tracking with Rails and Postgres pub/sub (no Redis!), and a record synchronization queuing system built on a table and some clever lock semantics that has been running in production for several years now, and which marketing relies upon as it oversees 10+ figures of yearly topline revenue.
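
Roughly, the shape of that kind of table-backed queue (the schema, names, and locking details here are a simplified illustration, not the production code) comes down to FOR UPDATE SKIP LOCKED:

    import psycopg2

    # Illustrative table:
    #   CREATE TABLE jobs (id bigserial PRIMARY KEY, payload jsonb, done boolean NOT NULL DEFAULT false);
    conn = psycopg2.connect("dbname=app")  # illustrative DSN

    def process(payload):
        ...  # placeholder for whatever the job actually does

    def claim_and_process_one():
        with conn:  # one transaction per job: commit on success, roll back on exception
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT id, payload
                    FROM jobs
                    WHERE NOT done
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED  -- workers skip rows other workers have already claimed
                """)
                row = cur.fetchone()
                if row is None:
                    return False  # nothing to do
                job_id, payload = row
                process(payload)
                cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
        return True

If a worker dies mid-job, the transaction rolls back and the row becomes claimable again, which is a lot of what you'd otherwise reach for a dedicated queueing system to get.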

Neither of those projects were FAANG scale, but they work fine for what is needed and scale relatively cleanly with Postgres itself.

Besides, in a lot of environments corporate will only approve the use of certain tools. And if you already have one approved that does the job, then why not?

◧◩◪◨⬒⬓⬔⧯▣
9. daenz+BP[view] [source] 2021-12-18 06:48:38
>>tomc19+YK
>some clever lock semantics

Most senior+ engineers that I know would hear that and recoil. Getting "clever" with concurrency handling in your home-rolled queuing system is not something that coworkers, especially more senior coworkers, will appreciate inheriting, adapting, and maintaining. Believe me.

I get that you're trying to flex some cool thing that you built, but it doesn't really have any bearing on the concept of "most cases" because it's an anecdote. Queuing systems are a thing for a reason, and in most cases, using them makes more sense than writing your own.

◧◩◪◨⬒⬓⬔⧯▣▦
10. pritam+AV[view] [source] 2021-12-18 08:01:46
>>daenz+BP
> Most senior+ engineers that I know would hear that and recoil. Getting "clever" with concurrency handling in your home-rolled queuing system is not something that coworkers, especially more senior coworkers, will appreciate inheriting, adapting, and maintaining. Believe me.

I am both a "senior+ engineer" who has inherited such systems and an author of such systems. I think you're overreacting.

Concurrency Control (i.e., "lock semantics") exists for a reason: correctness. Using it for its designed purpose is nothing to recoil from. Yes, like any tool, you need to use it correctly. But you don't just throw away correctness because you don't want to learn how to use the right tool properly.

I have inherited poorly designed concurrency systems (in the database); yes, I recoiled in horror and did not appreciate it. So you know what I did? I fixed the design, and documented it to show others how to do it correctly.

I have also inherited out-of-band "Queuing Systems" that could not possibly be correct because they weren't integrated into the DB's built-in and already-used correctness system: Transactions and Concurrency Control. Those were always more horrific than poorly implemented in-DB solutions. Integrating two disparate stores is always more trouble than just fixing one single source.
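
To make that concrete, here's a toy sketch (the tables and the job shape are invented for illustration): enqueueing inside the same transaction as the business write means a job can never exist for work that was rolled back, and committed work can never lose its job. With an out-of-band broker you only get that by bolting on something like an outbox table, which is the in-DB queue all over again.

    import psycopg2
    from psycopg2.extras import Json

    # Illustrative tables:
    #   orders(id bigserial PRIMARY KEY, customer_id bigint NOT NULL)
    #   jobs(id bigserial PRIMARY KEY, payload jsonb NOT NULL)
    conn = psycopg2.connect("dbname=app")  # illustrative DSN

    def place_order(customer_id):
        # The business row and its follow-up job are written in ONE transaction:
        # after commit either both exist, or neither does.
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO orders (customer_id) VALUES (%s) RETURNING id",
                    (customer_id,),
                )
                order_id = cur.fetchone()[0]
                cur.execute(
                    "INSERT INTO jobs (payload) VALUES (%s)",
                    (Json({"type": "order_confirmation", "order_id": order_id}),),
                )
        return order_id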

----

> I get that you're trying to flex some cool thing that you built, but it doesn't really have any bearing on the concept of "most cases" because it's an anecdote. Queuing systems are a thing for a reason, and in most cases, using them makes more sense than writing your own.

I get that you're trying to flex that you use turnkey queuing systems, but it doesn't really have any bearing on the concept of "most cases", because all you've presented are assertions without backing. Queuing systems are good for a specific kind of job, but when you need relational logic you'd better use one that supports it. And despite what MongoDB and the NoSQL crowd have been screaming hoarsely for the past decade, in most cases you do have relational logic.
