zlacker

[return to "Do you really need Redis? How to get away with just PostgreSQL"]
1. _ugfj+z2 2021-06-12 07:29:54
>>hyzyla+(OP)
You really don't need anything fancy to implement a queue using SQL. You need a table with a primary id and a "status" field. An "expire" timestamp can be used instead of the "status"; we used the latter because it allows easy retries.

1. SELECT item_id WHERE expire = 0. If this is empty, no items are available.

2. UPDATE SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0. Then check whether the UPDATE affected any rows. If it did, the item is yours. If not, loop back to step 1. If the database has a sane optimizer, it will note that at most one row needs locking, since the primary id is given.

All this needs is a very weak property: a row-level atomic UPDATE that can report whether it changed anything. (How weak? MongoDB could do that in 2009.)
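
To make that concrete, here is a minimal sketch of the two steps in PostgreSQL-flavoured SQL. The table and column names (queue, payload) and the literal timestamps are mine for illustration, not the Drupal schema:

    -- Hypothetical schema: expire = 0 means "unclaimed".
    CREATE TABLE queue (
        item_id BIGSERIAL PRIMARY KEY,
        payload TEXT NOT NULL,
        expire  BIGINT NOT NULL DEFAULT 0  -- unix timestamp of the claim's lease, 0 = free
    );

    -- Step 1: find a candidate item (no locking yet).
    SELECT item_id FROM queue WHERE expire = 0 LIMIT 1;

    -- Step 2: try to claim it. The "AND expire = 0" makes the claim atomic:
    -- only one worker's UPDATE can match the row while it is still unclaimed.
    UPDATE queue
    SET expire = 1623500000              -- now + lease time, computed by the worker
    WHERE item_id = 42 AND expire = 0;   -- 42 = the item_id from step 1
    -- "UPDATE 1" means the item is yours; "UPDATE 0" means another worker won,
    -- so go back to step 1.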

Source code at https://git.drupalcode.org/project/drupal/-/blob/9.2.x/core/... (We cooked this up for Drupal in 2009 but I am reasonably sure we didn't invent anything new.)

Of course, this is not the fastest job queue there is, but it is quite often good enough.

2. soroko+Lo 2021-06-12 11:41:59
>>_ugfj+z2
What happens if the process that performed step 2 crashes before it is able to complete whatever processing it was supposed to do?
3. jffry+ps 2021-06-12 12:31:17
>>soroko+Lo
The idea, I think, is that you wouldn't delete the job from the queue until the processing is done.

Of course, this relies on the jobs being something that can be retried.
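
A hedged sketch of that, continuing the hypothetical queue table from above: the row is only removed after the work succeeds, and a crashed worker's row becomes claimable again once its lease has expired.

    -- After the job's work has finished successfully, remove it:
    DELETE FROM queue WHERE item_id = 42;

    -- If the worker crashed instead, the row is still there with a stale lease.
    -- Once "now" has passed the old expire value, another worker may re-claim it:
    UPDATE queue
    SET expire = 1623503600                      -- new lease: now + lease time
    WHERE item_id = 42
      AND expire <> 0 AND expire < 1623500000;   -- 1623500000 standing in for "now"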

4. shawnz+ZF 2021-06-12 14:47:00
>>jffry+ps
But you will have marked it as in progress by setting the "expire" to a non-zero value, preventing any other workers from trying to work on it. How will they know that the worker which marked it actually crashed and will never finish?

If you use SELECT ... FOR UPDATE SKIP LOCKED instead, the row stays locked only for the duration of the worker's transaction, so it is automatically released if the worker crashes.
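
For reference, a minimal PostgreSQL sketch of that pattern (again with a hypothetical queue table): the row lock lives only as long as the transaction, so a crashed worker's dropped connection releases the job for others.

    BEGIN;

    -- Claim one job; rows locked by other workers are skipped instead of waited on.
    SELECT item_id, payload
    FROM queue
    WHERE expire = 0
    ORDER BY item_id
    FOR UPDATE SKIP LOCKED
    LIMIT 1;

    -- ... process the job here, inside the open transaction ...

    DELETE FROM queue WHERE item_id = 42;   -- 42 = the item_id returned above
    COMMIT;
    -- If the worker dies before COMMIT, the transaction aborts, the lock is
    -- released, and another worker can pick the job up immediately.

The trade-off versus the expire/lease scheme is that the job is processed inside an open transaction, so very long jobs tie up a connection and hold the lock for their whole duration.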
