zlacker

1. andrel+(OP) 2023-09-24 20:48:04
I guess you update it with the assigned worker id, where the "taken by" field is currently null? Does it mean that workers have persistent identities, something like an index? How do you deal with workers being replaced, scaled down, etc?

Just curious. We maintained a custom background processing system for years but recently replaced it with off-the-shelf stuff, so I'm really interested in how others are doing similar things.

replies(2): >>matsem+L >>calrai+ng
2. matsem+L 2023-09-24 20:52:21
>>andrel+(OP)
No, just UPDATE ... SET taken = 1. If the update changed the row, you got the task. If it didn't, someone else updated it before you.

Our tasks were quick enough that everything already fetched could always complete before a scale-down / new deploy etc.; we stopped fetching new tasks when the signal came, so each worker just finished what it had. As I noted above, we did have logic to monitor for tasks that were taken but never reached a finished status, but I can't remember it ever actually reporting anything.
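
A minimal sketch of that pattern (table and column names here are assumed, not our real schema). With MySQL-style affected-rows semantics, an UPDATE that changes nothing reports 0 changed rows, so the count alone tells you who won:

    -- claim task 42 (taken defaults to 0)
    UPDATE tasks SET taken = 1 WHERE id = 42;
    -- driver reports 1 changed row: this worker owns the task
    -- driver reports 0 changed rows: another worker got there first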

replies(3): >>fbdab1+f1 >>fsnipe+q3 >>SahAss+k4
3. fbdab1+f1 2023-09-24 20:56:15
>>matsem+L
I would set the taken field to a timestamp. Then you could have a cleanup job that looks for any lingering jobs aged past a reasonable timeout and nulls out the field.
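
A sketch of the two statements (Postgres-flavored; the jobs table and column names are assumptions):

    -- claim: record when the job was taken
    UPDATE jobs SET taken_at = now()
    WHERE id = 42 AND taken_at IS NULL;

    -- cleanup job: release anything claimed too long ago but never finished
    UPDATE jobs SET taken_at = NULL
    WHERE taken_at < now() - INTERVAL '10 minutes'
      AND finished_at IS NULL;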
replies(3): >>tylerg+72 >>Izkata+7D >>magica+b11
4. tylerg+72 2023-09-24 21:02:35
>>fbdab1+f1
It won't work with a timestamp, because the writes happen at different times, so each write reports 1 affected row. Setting a boolean is static.
replies(3): >>jayd16+E3 >>twic+q8 >>AdamJa+Jk
5. fsnipe+q3 2023-09-24 21:12:53
>>matsem+L
You can combine this "update" with a "where taken = 0" to directly skip taken rows.
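
I.e., something like (names assumed):

    -- a second worker's statement matches zero rows, so the affected-row
    -- count is 0 on any database, not just ones with changed-row semantics
    UPDATE tasks SET taken = 1 WHERE id = 42 AND taken = 0;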
6. jayd16+E3 2023-09-24 21:15:05
>>tylerg+72
You can do something like UPDATE tasks SET taken_at = now() WHERE now() - taken_at > task_timeout to reclaim stale rows. You're not stuck with comparing bools.
7. SahAss+k4 2023-09-24 21:20:06
>>matsem+L
That is the sort of thing that bites hard when it finally does. It might run perfectly for years, but one period of flaky downtime at a third party, or a slightly misconfigured DNS, will bite you hard.
replies(1): >>matsem+J7
8. matsem+J7 2023-09-24 21:48:54
>>SahAss+k4
But compared to the rabbit setup where I work now, it was dead stable. No lost tasks, and no extra engineering effort spent maintaining yet another piece of tech. Our rabbit cluster acting up has led to multiple disasters lately.
replies(1): >>SahAss+z9
9. twic+q8 2023-09-24 21:54:36
>>tylerg+72
update tasks set taken_timestamp = now() where task_id = ? and taken_timestamp is null
10. SahAss+z9 2023-09-24 22:05:58
>>matsem+J7
Agreed, I've had my own rabbit nightmares. But setting up a more robust queue on PostgreSQL is straightforward, so you can gain a lot more guarantees without much more complexity.
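
For example, the usual PostgreSQL building block is FOR UPDATE SKIP LOCKED (9.5+). A sketch, with an assumed jobs table:

    -- atomically claim one job; rows locked by other workers are
    -- skipped rather than waited on, so workers never block each other
    UPDATE jobs SET taken_at = now()
    WHERE id = (
        SELECT id FROM jobs
        WHERE taken_at IS NULL
        ORDER BY created_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload;  -- payload column is an assumption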
11. calrai+ng 2023-09-24 23:08:04
>>andrel+(OP)
I've done this successfully with a web service front end that retrieves jobs and sends them to workers for processing, using a SQL table as the queue. That web service ran without a hitch for a long time, serving about 10 to 50 job consumers on fast, highly concurrent queues.

My approach was (a SQL sketch follows the list):

- Accept the inbound call

- Generate a 20-character random string (used as a signature)

- Execute a SQL query that selects the oldest job without a signature, writes the signature, and returns the primary key of the job that was updated.

- If it errors for any reason, loop back and attempt again, but only 10 times, since past that point some underlying issue exists (10 collisions in a row is statistically improbable for my use case)

- Read the row for the primary key returned by that SQL query, comparing its signature to my random one.

- If a hit, return the job to the caller

- If a miss, loop back and start again, incrementing attempts by 1.
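
The claim-and-verify steps might look like this in SQL (a Postgres-flavored sketch; the jobs table, columns, and the literal signature are assumptions, and :returned_id is a bind-parameter placeholder):

    -- claim the oldest unassigned job by stamping our random signature
    UPDATE jobs SET signature = 'hN4kq0PzR7wLx2TbVs9A'
    WHERE id = (
        SELECT id FROM jobs
        WHERE signature IS NULL
        ORDER BY created_at
        LIMIT 1
    )
    RETURNING id;

    -- verify the claim: two racing workers can stamp the same row,
    -- so re-read it and compare signatures; the loser loops and retries
    SELECT signature FROM jobs WHERE id = :returned_id;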

The caller has to handle the possibility that a call to this web service won't return anything, either because no jobs exist or because the collision/error threshold was reached.

In either case, the caller backs off for its configured time, then calls again.

Callers are usually in 'while true' loops, only exiting on an external signal to close or an uncontrolled crash.

If you take this approach, you will have a function or a web service that turns the SQL table into a job queue service. Once you do, you can build metrics on the number of collisions you get while pulling and assigning jobs to workers.

I had built-in processes that would sweep through jobs that were assigned (had a job signature) but weren't marked as complete; those were re-actioned to handle the case of a crashed worker.

There are many, many other features that proper job queues offer, but those usually mean more dependencies, code libraries, and containers, so just build in the functionality you need.

If it is accurate, fast enough, and stable, you've got the best solution for you.

/edited for formatting

12. AdamJa+Jk 2023-09-25 00:01:48
>>tylerg+72
update tasks set taken=true, taken_by=my_id, taken_at=now() where id=:task_id and taken is false;
13. Izkata+7D 2023-09-25 03:40:24
>>fbdab1+f1
We do it with two columns, one is an integer identifying which process took the job and the second is the timestamp for when it was taken.
14. magica+b11 2023-09-25 08:38:57
>>fbdab1+f1
We have a "status flag" column which is either Available, Locked or Processed (A, L and P), an Updated column with a timestamp of when it was last updated, and a Version counter.

When grabbing a new message, it selects "Available, or (Locked with an Updated timestamp older than the configured timeout)". If successful, it immediately tries to set the Locked status and Updated timestamp and bump the Version counter, where the previous values of Status and Version have to match. If the update fails, it retries getting a new message.

If the Version counter is too high, it moves the message to the associated dead-letter table, and retries getting a new message.

This isn't for high performance. I tested it and got 1000 messages/sec throughput with a handful of producers and consumers against a test DB instance (limited hardware), which is plenty for us.

I wrote it to be simple, and so we could easily move to something AMQP-ish like RabbitMQ or Azure Service Bus when needed. Overall it was quite easy to implement and has served us well so far.
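
A sketch of the grab-and-lock sequence (column names, timeout, and SQL dialect are assumed; :id etc. are bind-parameter placeholders, and the real implementation differs in detail):

    -- find a candidate message
    SELECT id, status, version FROM messages
    WHERE status = 'A'
       OR (status = 'L' AND updated < now() - INTERVAL '5 minutes')
    LIMIT 1;

    -- optimistic lock: previous Status and Version must still match
    UPDATE messages
    SET status = 'L', updated = now(), version = version + 1
    WHERE id = :id AND status = :seen_status AND version = :seen_version;
    -- 0 rows affected -> lost the race, fetch a new candidate
    -- version over the retry limit -> move the row to the dead-letter table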
