I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres, it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that roll back discard their jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
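To make the idea concrete, here's a minimal Go sketch of that kind of transactional enqueue, assuming a hypothetical staged_jobs table (a possible schema is sketched a bit further down) and a made-up signup flow; none of these names are prescribed by the pattern:

    package jobs

    import (
        "database/sql"
        "encoding/json"

        _ "github.com/lib/pq" // registers the "postgres" driver
    )

    func signUp(db *sql.DB, email string) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback() // a no-op once Commit succeeds

        // The domain write and the job share one transaction: if either
        // fails, the rollback discards both, so no orphaned job is left.
        if _, err := tx.Exec(`INSERT INTO users (email) VALUES ($1)`, email); err != nil {
            return err
        }
        args, err := json.Marshal(map[string]string{"email": email})
        if err != nil {
            return err
        }
        if _, err := tx.Exec(
            `INSERT INTO staged_jobs (job_name, job_args) VALUES ($1, $2)`,
            "send_welcome_email", string(args),
        ); err != nil {
            return err
        }
        return tx.Commit() // the job becomes visible to workers only here
    }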
Worker contention while locking can cause a variety of bad operational problems for a job queue that's put directly in a database (as with the likes of delayed_job, Que, and queue_classic). The idea of staging the jobs first is meant as a compromise: all the benefits of transactional isolation, but with significantly less operational trouble, at the cost of only a slight delay while an enqueuer moves jobs out of the database and into the job queue.
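For reference, one plausible shape for that staging table (no schema is given above, so this is purely an illustration):

    package jobs

    // Jobs live in this table only between COMMIT and the moment the
    // enqueuer copies them into the real job queue.
    const createStagedJobs = `
    CREATE TABLE staged_jobs (
        id       BIGSERIAL PRIMARY KEY,
        job_name TEXT  NOT NULL,
        job_args JSONB NOT NULL
    );`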
I'd be curious to hear what people think.
Except we use PostgreSQL's LISTEN/NOTIFY, which amazingly is also transaction-aware (NOTIFY doesn't fire until the transaction commits, and, even more amazingly, duplicate notifications within a transaction are delivered only once!), to move the jobs from the database to the message queue.
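One way to wire that up, sketched here as a guess since the trigger isn't described above, is a statement-level trigger, so the NOTIFY rides along with whatever transaction does the insert:

    package jobs

    // Every statement that inserts into staged_jobs fires one NOTIFY on
    // the "staged_jobs" channel. Because NOTIFY is transactional,
    // listeners hear nothing until COMMIT, and duplicate notifications
    // within one transaction are delivered only once.
    const notifyOnInsert = `
    CREATE FUNCTION notify_staged_jobs() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('staged_jobs', '');
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER staged_jobs_notify
        AFTER INSERT ON staged_jobs
        FOR EACH STATEMENT
        EXECUTE PROCEDURE notify_staged_jobs();`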
This way we never lock the queue table. We run a simple Go program on the PostgreSQL server that LISTENs, queries, pushes to Redis, and then deletes, with a 0.25-second delay so that we batch the I/O instead of processing each row individually.
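Roughly, that loop might look like this, assuming lib/pq for the LISTEN side and go-redis for the push; the channel, table, and key names are invented, and a production version would handle errors and reconnects far more carefully:

    package main

    import (
        "context"
        "database/sql"
        "time"

        "github.com/go-redis/redis/v8"
        "github.com/lib/pq"
    )

    const conninfo = "dbname=app sslmode=disable"

    func main() {
        db, err := sql.Open("postgres", conninfo)
        if err != nil {
            panic(err)
        }
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        ctx := context.Background()

        listener := pq.NewListener(conninfo, time.Second, time.Minute, nil)
        if err := listener.Listen("staged_jobs"); err != nil {
            panic(err)
        }

        for range listener.Notify {
            // Wait a beat so a burst of commits is drained as one batch
            // instead of one round trip per row.
            time.Sleep(250 * time.Millisecond)
            drain(listener)

            rows, err := db.Query(`SELECT id, job_name, job_args FROM staged_jobs ORDER BY id`)
            if err != nil {
                continue
            }
            var ids []int64
            for rows.Next() {
                var id int64
                var name, args string
                if err := rows.Scan(&id, &name, &args); err != nil {
                    break
                }
                rdb.LPush(ctx, "jobs:"+name, args)
                ids = append(ids, id)
            }
            rows.Close()

            // Delete only what was pushed; rows committed after the
            // SELECT stay staged for the next notification.
            db.Exec(`DELETE FROM staged_jobs WHERE id = ANY($1)`, pq.Array(ids))
        }
    }

    // drain swallows notifications that piled up during the sleep so one
    // batch doesn't cause several redundant wakeups.
    func drain(l *pq.Listener) {
        for {
            select {
            case <-l.Notify:
            default:
                return
            }
        }
    }

Note that pushing before deleting means a crash between the two steps can deliver a job twice, so workers still need to be idempotent, as with most at-least-once queues.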
This also allowed us to create jobs via INSERT INTO ... SELECT, which is awesome when you're creating 50k jobs at a time.
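Presumably something along these lines (the newsletter example is made up); with a statement-level trigger like the one sketched earlier, staging 50k jobs in one statement still costs just a single NOTIFY:

    package jobs

    // One INSERT ... SELECT stages a job per matching user entirely
    // inside the database, with no 50k-iteration loop in the app.
    const bulkEnqueue = `
    INSERT INTO staged_jobs (job_name, job_args)
    SELECT 'send_newsletter', jsonb_build_object('user_id', id)
    FROM users
    WHERE subscribed;`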