I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that roll back discard jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
Lock contention between workers can cause a variety of operational problems for a job queue that lives directly in a database (for the likes of delayed_job, Que, and queue_classic). The idea of staging jobs first is meant as a compromise: all the benefits of transactional isolation, but with significantly less operational trouble, at the cost of jobs being only slightly delayed as an enqueuer moves them out of the database and into a job queue.
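Roughly the shape I have in mind, as a sketch (StagedJob, the staged_jobs table, and the other ActiveRecord model names are just illustrative, not anything prescribed):

    # Stage the job in the same transaction as the data it depends on,
    # writing it to a staged_jobs table so it only becomes visible on commit.
    ActiveRecord::Base.transaction do
      user = User.create!(email: "jane@example.com")
      StagedJob.create!(job_name: "WelcomeEmailJob", job_args: [user.id])
    end
    # A rollback discards the staged job row along with everything else.
    # A separate enqueuer process later moves committed rows out of the
    # table and into the real job queue (Sidekiq or similar), then deletes them.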
I'd be curious to hear what people think.
> Sidekiq.enqueue(job.job_name, *job.job_args)
You're doing all your enqueueing in a transaction, so if any enqueue call fails (e.g. a network error) you'll break the transaction and requeue all of your jobs, even the ones that were already successfully delivered.
Given that you're lobbing the job outside of the DB transaction boundary, why have that transaction at all? It's not clear to me why all the jobs should share the same fate.
If you want at-least-once message delivery, can't you configure that in your queue? (For example, RabbitMQ supports both modes: only ACKing after the task completes, or ACKing as soon as the worker dequeues the message.)
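For illustration, with Bunny (a Ruby RabbitMQ client) the difference is just when you ack; the queue name and work method here are made up:

    require "bunny"

    conn = Bunny.new
    conn.start
    ch    = conn.create_channel
    queue = ch.queue("tasks", durable: true)

    # At-least-once: ack only after the work succeeds, so an unacked
    # message is redelivered if the worker dies mid-task.
    queue.subscribe(manual_ack: true, block: true) do |delivery_info, _props, payload|
      perform_task(payload)                  # hypothetical work method
      ch.ack(delivery_info.delivery_tag)
    end
    # At-most-once would be manual_ack: false (the default), where the
    # broker treats the message as delivered as soon as it's pushed out.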
I'm not familiar with Sidekiq, so maybe that's not an option there. But in that case, it's still not clear why you'd requeue all the tasks if one of them fails to be enqueued (or its delete fails); you could just decline to delete the row for the individual job that failed.
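Something like this per-job handling is what I'd expect instead (a sketch; Sidekiq.enqueue is the post's pseudo-call and StagedJob stands in for the staged-jobs table):

    # Handle each staged job independently: a failed enqueue (or delete)
    # leaves only that one row behind to be retried on the next pass,
    # instead of rolling back the whole batch.
    StagedJob.order(:id).limit(100).each do |job|
      begin
        Sidekiq.enqueue(job.job_name, *job.job_args)
        job.destroy
      rescue => e
        warn("enqueue failed for staged job #{job.id}: #{e.message}")
        # leave the row in place; the next pass will pick it up again
      end
    end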