I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres, it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that roll back discard jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
Worker contention while locking can cause a variety of bad operational problems for a job queue that's put directly in a database (for the likes of delayed_job, Que, and queue_classic). The idea of staging the jobs first is meant as a compromise: all the benefits of transactional isolation, but with significantly less operational trouble, and at the cost of jobs being only slightly delayed as an enqueuer moves them out of the database and into a job queue.
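Roughly, the shape is a staging table plus an enqueuer process. A minimal sketch of what that might look like; all table, column, and job names here are illustrative, not part of the actual pattern's API:

```sql
-- Hypothetical staging table. Jobs are inserted here in the same
-- transaction as the data they relate to, so they only become visible
-- once that transaction commits.
CREATE TABLE staged_jobs (
    id       BIGSERIAL PRIMARY KEY,
    job_name TEXT  NOT NULL,
    job_args JSONB NOT NULL DEFAULT '{}'
);

-- Enqueue transactionally alongside other writes (the users table is
-- assumed purely for illustration). A rollback discards both rows.
BEGIN;
INSERT INTO users (email) VALUES ('user@example.com');
INSERT INTO staged_jobs (job_name, job_args)
    VALUES ('send_welcome_email', '{"email": "user@example.com"}');
COMMIT;

-- A separate enqueuer repeatedly claims a batch of committed rows,
-- pushes them to the real job queue, and deletes them in the same
-- transaction. FOR UPDATE SKIP LOCKED keeps multiple enqueuers from
-- contending on the same rows.
BEGIN;
DELETE FROM staged_jobs
WHERE id IN (
    SELECT id FROM staged_jobs
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
RETURNING id, job_name, job_args;
-- (push the returned rows to the job queue here, then COMMIT;
--  ROLLBACK on a failed push and the rows reappear for the next pass)
COMMIT;
```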
I'd be curious to hear what people think.
+1 to in-database queues that are implemented correctly. The sanity of transactionally consistent enqueuing alone is worth it. I've used similar patterns as a staging area for many years.
This allows for transactionally consistent error handling as well. If a job is repeatedly failing, you can transactionally remove it from the main queue and add it to a dead letter queue.
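Something along these lines, assuming hypothetical `jobs` and `dead_letter_jobs` tables and an illustrative retry threshold:

```sql
BEGIN;

-- Move repeatedly failing jobs into a dead letter table in one atomic
-- step: either both the delete and the insert happen, or neither does.
WITH dead AS (
    DELETE FROM jobs
    WHERE error_count >= 10
    RETURNING id, job_name, job_args, last_error
)
INSERT INTO dead_letter_jobs (job_id, job_name, job_args, last_error)
SELECT id, job_name, job_args, last_error FROM dead;

COMMIT;
```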
Totally. This also leads to other operational tricks that you hope you never need, but are great the day you do. For example, a bad deploy queues a bunch of jobs with invalid arguments that will never succeed. You can open a transaction and go in and fix them in bulk using an `UPDATE` with jsonb selection and manipulation operators. You can then even issue a `SELECT` to make sure that things look right before running `COMMIT`.
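A repair session might look roughly like this; the table name, job name, and the broken argument are all hypothetical:

```sql
BEGIN;

-- Suppose a bad deploy enqueued jobs whose "user_id" argument was
-- written as a string instead of an integer. Fix them in bulk.
UPDATE jobs
SET job_args = jsonb_set(job_args, '{user_id}',
                         to_jsonb((job_args->>'user_id')::int))
WHERE job_name = 'sync_user'
  AND jsonb_typeof(job_args->'user_id') = 'string';

-- Sanity-check the result before making it permanent.
SELECT id, job_args
FROM jobs
WHERE job_name = 'sync_user'
LIMIT 20;

-- Looks right? COMMIT. Otherwise ROLLBACK and nothing changed.
COMMIT;
```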
Again, something that you hope no one ever does in production, but a life saver in an emergency.