zlacker

[return to "Transactionally Staged Job Drains in Postgres"]
1. brandu+K9 2017-09-20 16:07:31
>>johns+(OP)
(Author here.)

I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that roll back discard their jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
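Concretely, the enqueue side can be as simple as inserting the job row in the same transaction as everything else. A rough sketch in Python with psycopg2 (the users/staged_jobs table names and columns here are just illustrative, not lifted from the article):

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=app")

    def signup_user(email):
        # "with conn" opens a transaction: commit on success, rollback on error.
        with conn:
            with conn.cursor() as cur:
                # Application data and the staged job commit (or roll back) together.
                cur.execute(
                    "INSERT INTO users (email) VALUES (%s) RETURNING id",
                    (email,),
                )
                user_id = cur.fetchone()[0]
                cur.execute(
                    "INSERT INTO staged_jobs (job_name, job_args) VALUES (%s, %s)",
                    ("send_welcome_email", json.dumps({"user_id": user_id})),
                )
        # If anything above raised, the job row was discarded along with the
        # user row -- no orphaned job, and no job referencing missing data.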

Worker contention around locking can cause a variety of bad operational problems for a job queue that lives directly in the database (as with the likes of delayed_job, Que, and queue_classic). The idea of staging the jobs first is meant as a compromise: all the benefits of transactional isolation, significantly less operational trouble, and the only cost is that jobs are slightly delayed while an enqueuer moves them out of the database and into the job queue.
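The enqueuer itself is just a loop that moves whatever has been committed into the real queue and then deletes it, all in one transaction. A sketch along the same lines (single enqueuer process assumed; push_to_queue stands in for whatever client pushes to Sidekiq/Redis/etc.):

    import time

    def drain_staged_jobs(conn, push_to_queue, batch_size=100):
        while True:
            with conn:
                with conn.cursor() as cur:
                    # Only committed rows are visible here; jobs staged by
                    # still-open transactions won't be picked up yet.
                    cur.execute(
                        "SELECT id, job_name, job_args FROM staged_jobs "
                        "ORDER BY id LIMIT %s",
                        (batch_size,),
                    )
                    rows = cur.fetchall()
                    for _, name, args in rows:
                        push_to_queue(name, args)
                    if rows:
                        # Delete exactly what was handed off, in the same transaction.
                        cur.execute(
                            "DELETE FROM staged_jobs WHERE id = ANY(%s)",
                            ([r[0] for r in rows],),
                        )
            if not rows:
                time.sleep(1)  # nothing staged; back off briefly

If the enqueuer dies after pushing but before the delete commits, the same job can get pushed again, so workers still need to be idempotent -- the usual at-least-once trade-off rather than something this pattern adds.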

I'd be curious to hear what people think.

2. geeio+is 2017-09-20 17:53:26
>>brandu+K9
I do the same thing. Small projects start with the job queue in postgres.

As things eventually scale up, I move the queue to its own dedicated postgres node.

Once that starts to be too slow, I finally move to redis/kafka. 99% of things never make it to this stage.
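Roughly, the "queue in postgres" stage looks something like this (just a sketch, not my exact setup; needs Postgres 9.5+ for SKIP LOCKED, and run_job is a placeholder for the actual work):

    def work_one_job(conn):
        with conn:
            with conn.cursor() as cur:
                # SKIP LOCKED lets concurrent workers grab different rows
                # without blocking on each other's locks.
                cur.execute(
                    "SELECT id, job_name, job_args FROM jobs "
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED"
                )
                row = cur.fetchone()
                if row is None:
                    return False          # queue is empty
                job_id, name, args = row
                run_job(name, args)       # placeholder for the actual work
                cur.execute("DELETE FROM jobs WHERE id = %s", (job_id,))
        return True

The catch, as the parent comment says, is that the row lock and transaction stay open while the job runs, which is where the operational pain starts once jobs get long or volume gets high.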
