zlacker

[return to "Transactionally Staged Job Drains in Postgres"]
1. brandu+K9 2017-09-20 16:07:31
>>johns+(OP)
(Author here.)

I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that roll back discard jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
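
For concreteness, here's a minimal sketch of what the enqueue side can look like (illustrative Python with psycopg2; the staged_jobs table and column names are made up for the example, not taken from the article):

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=app")

    # The job row is written in the same transaction as the domain data, so a
    # rollback discards both and the job is never visible to the drain.
    with conn:  # psycopg2: commits on success, rolls back on exception
        with conn.cursor() as cur:
            cur.execute("INSERT INTO users (email) VALUES (%s) RETURNING id",
                        ("jane@example.com",))
            user_id = cur.fetchone()[0]
            cur.execute(
                "INSERT INTO staged_jobs (job_name, job_args) VALUES (%s, %s)",
                ("send_welcome_email", json.dumps({"user_id": user_id})),
            )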

Worker contention while locking can cause a variety of bad operational problems for a job queue that's put directly in a database (for the likes of delayed_job, Que, and queue_classic). The idea of staging the jobs first is meant as a compromise: all the benefits of transactional isolation, but with significantly less operational trouble, and at the cost of only a slight delay as an enqueuer moves jobs out of the database and into the job queue.
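
The enqueuer/drain side is roughly a loop like the one below (again an illustrative Python sketch; enqueue_to_queue is a placeholder for whatever client pushes to the real queue, and delivery is at-least-once, so workers should be idempotent):

    import time
    import psycopg2

    def enqueue_to_queue(job_name, job_args):
        """Placeholder for pushing the job onto the real queue (Sidekiq, SQS, ...)."""
        ...

    conn = psycopg2.connect("dbname=app")
    while True:
        with conn:
            with conn.cursor() as cur:
                # Claim a batch and delete it in the same transaction; the delete
                # only commits after the hand-off, so a crash here can re-deliver
                # a job, which is why workers need to be idempotent.
                cur.execute(
                    "DELETE FROM staged_jobs "
                    "WHERE id IN (SELECT id FROM staged_jobs ORDER BY id LIMIT 100) "
                    "RETURNING job_name, job_args"
                )
                for job_name, job_args in cur.fetchall():
                    enqueue_to_queue(job_name, job_args)
        time.sleep(1)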

I'd be curious to hear what people think.

2. koolba+cd 2017-09-20 16:23:47
>>brandu+K9
> I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient.

+1 to in-database queues that are implemented correctly. The sanity of transactionally consistent enqueuing alone is worth it. I've used similar patterns as a staging area for many years.

This allows for transactionally consistent error handling as well. If a job is repeatedly failing, you can transactionally remove it from the main queue and add it to a dead-letter queue.
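
Something along these lines, sketched with made-up table names (staged_jobs / dead_letter_jobs), not taken from the article:

    import psycopg2

    def dead_letter(conn, job_id, last_error):
        # Move a repeatedly failing job out of the live queue and into a
        # dead-letter table in one transaction: it's either fully moved or
        # not moved at all, never lost or duplicated between the two tables.
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO dead_letter_jobs (job_name, job_args, last_error) "
                    "SELECT job_name, job_args, %s FROM staged_jobs WHERE id = %s",
                    (last_error, job_id),
                )
                cur.execute("DELETE FROM staged_jobs WHERE id = %s", (job_id,))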
