zlacker

[return to "Choose Postgres queue technology"]
1. vhirem+4o[view] [source] 2023-09-25 00:03:08
>>bo0tzz+(OP)
We used postgres for some of our queues back when we were at ~10 msg/s. It scaled quite a bit, but honestly, setting up SQS or some other queue stack in AWS, GCP, or Azure is so simple and purpose-built for the task (with dead-letter queues and the like built in), I don’t know why you wouldn’t just go that route and not have to worry about that system shitting the bed and affecting the rest of the DB’s health.

It seems foolish. I am a big fan of “use the dumbest tool”, but sometimes engineers take it too far and you’re left with the dumbest tool with caveats that don’t seem worth it given the mainstream alternative is relatively cheap and simple.

2. rtpg+Lp[view] [source] 2023-09-25 00:24:38
>>vhirem+4o
What I've settled on is "store most job state in the DB, use task queues just to poke workers into working on the jobs".

Storing the job state in the DB means you can query state nicely. It's not going to show the exact state of things, but it's helpful for working through a live incident (especially since most job queues just delete records as work is processed).

And if you make all the background tasks idempotent anyway, then you're almost always safe just firing off "send a message to the task queue to handle this job".
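A minimal sketch of that pattern (table, column, and function names are all made up for illustration; the queue publish is stubbed with a plain function call, and SQLite stands in for the real DB):

```python
import sqlite3

# Job state lives in the DB; the queue message carries only the job id.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id     INTEGER PRIMARY KEY,
        kind   TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending'  -- 'pending' or 'done'
    )
""")

def handle(job_id):
    """Idempotent worker: claiming the row guards against duplicate pokes."""
    cur = conn.execute(
        "UPDATE jobs SET status = 'done' WHERE id = ? AND status = 'pending'",
        (job_id,),
    )
    if cur.rowcount == 0:
        return  # already handled -- safe to receive the same message twice
    # ... do the actual work here ...

def enqueue(job_id):
    """Stand-in for an SQS/RabbitMQ publish: just pokes a worker with the id."""
    handle(job_id)

conn.execute("INSERT INTO jobs (id, kind) VALUES (1, 'send_email')")
enqueue(1)
enqueue(1)  # duplicate delivery is a no-op
```

Because the message is just a pointer into the `jobs` table, a redelivered or replayed message finds the row already claimed and does nothing, and incident debugging is an ordinary SQL query over `jobs` rather than spelunking inside the broker.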

If you rely _just_ on message queues, you can hit performance issues and still have a lot of trouble knowing what's going on (for example, RabbitMQ will tell you the size of your queues, but offers little to no inspection of the data inside them).
