zlacker

[return to "Transactionally Staged Job Drains in Postgres"]
1. bgentr+Kf 2017-09-20 16:38:37
>>johns+(OP)
This pattern would basically be a clean migration away from a pure Postgres queue if either table bloat or locking becomes a performance problem. You maintain the benefits of transactional job enqueueing while only slightly worsening edge cases that could cause jobs to be run multiple times.

Just be sure to run your enqueueing process as a singleton, or each worker would be redundantly enqueueing lots of jobs. This can be guarded with a session advisory lock or a Redis lock.
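The advisory-lock approach might look something like this (a sketch, not from the article; the lock key 123456 is an arbitrary value all workers would have to agree on):

```sql
-- Each enqueuer tries to take a session-level advisory lock on startup.
-- Only one session can hold a given key at a time.
SELECT pg_try_advisory_lock(123456);
-- true for the one process that should run the enqueue loop,
-- false for everyone else (they can sleep and retry)

-- On clean shutdown:
SELECT pg_advisory_unlock(123456);
-- Session-level advisory locks are also released automatically
-- if the connection drops, so a crashed singleton won't wedge the queue.
```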

Knowing that this easy transition exists makes me even more confident in just using Que and not adding another service dependency (like Redis) until it’s really needed.

2. Deimor+QQ 2017-09-20 20:39:37
>>bgentr+Kf
> Just be sure to run your enqueueing process as a singleton, or each worker would be redundantly enqueueing lots of jobs. This can be guarded with a session advisory lock or a Redis lock.

If you're using PostgreSQL 9.5+ you can also use the new SKIP LOCKED capability, which is perfect for this sort of usage: https://blog.2ndquadrant.com/what-is-select-skip-locked-for-...
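With SKIP LOCKED you don't need a singleton at all: several enqueuers can drain the staging table concurrently, each skipping rows another worker has already locked. A sketch of what that could look like (the staged_jobs table and its columns are illustrative, not from the article):

```sql
-- Each worker atomically claims and removes a batch of staged jobs.
-- Rows locked by a concurrent worker are skipped rather than waited on.
BEGIN;

WITH claimed AS (
    SELECT id
    FROM staged_jobs
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
DELETE FROM staged_jobs
WHERE id IN (SELECT id FROM claimed)
RETURNING id, job_name, job_args;

-- ... push the returned jobs to the real queue, then:
COMMIT;
```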
