zlacker

[return to "Show HN: WakaQ - a Python distributed task queue"]
1. osdev+He[view] [source] 2022-09-06 00:04:24
>>welder+(OP)
Nice work on this! I’ve looked into task queue libraries for Node and Java in the past. Yours looks straightforward. A few questions:

1. What’s the error handling strategy for when a task/payload fails?

2. How exactly do delayed tasks work? For example, are you delaying execution until, say, 10 minutes later? How do you process delayed tasks in sequential timed order?

3. What kind of metrics/stats are available?

4. Is there a way to pause and resume, or is this the same as start and stop?

Congrats.

2. welder+Hh[view] [source] 2022-09-06 00:31:02
>>osdev+He
1. That's the part I'm not happy with, but currently I use the `@wakaq.after_worker_started` decorator to set up an error logging handler in each worker. It outputs to a file that gets aggregated for error reporting, but without examples most people wouldn't know to do that. Here's the code: https://gist.github.com/alanhamlett/365d48276ac054ae75e59525...
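A minimal sketch of the idea (the decorator name is from above; the file path, format, and function body here are illustrative, not the gist's exact code):

```python
import logging

def setup_error_logging(path="wakaq-errors.log"):
    # In WakaQ this body would run inside a function decorated with
    # @wakaq.after_worker_started, once per worker process.
    # Path and formatter are illustrative assumptions.
    handler = logging.FileHandler(path)
    handler.setLevel(logging.ERROR)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logging.getLogger().addHandler(handler)
    return handler
```

Each worker then appends its errors to that file, and a separate job aggregates the files for reporting.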

2. Delayed tasks are added to a Redis sorted set, with the eta datetime (when the task should run) as the score. The sorted set is then queried for all tasks scored between zero and the current time, which returns the eta tasks that are ready to run. Those tasks are pushed to the front of their equivalent non-eta queue and executed by the next available worker. Eta tasks might not run exactly at their eta datetime, but they shouldn't run too early as long as all worker machines have clocks synced with time servers.
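The mechanics in pure Python, as a toy model (in Redis the enqueue is a ZADD with the eta timestamp as score, and the fetch is a ZRANGEBYSCORE from 0 to now; class and method names here are illustrative):

```python
class EtaQueue:
    """Toy model of the eta sorted set; not the WakaQ implementation."""

    def __init__(self):
        self._items = []  # (eta_timestamp, task) pairs, kept sorted

    def delay(self, task, eta_ts):
        # ZADD equivalent: the score is the eta unix timestamp.
        self._items.append((eta_ts, task))
        self._items.sort(key=lambda pair: pair[0])

    def due(self, now_ts):
        # ZRANGEBYSCORE 0 <now> equivalent: everything whose eta has
        # passed is ready; it gets moved to the front of the normal queue.
        ready = [t for s, t in self._items if s <= now_ts]
        self._items = [(s, t) for s, t in self._items if s > now_ts]
        return ready
```

Because the fetch is by score range, tasks come out in eta order regardless of insertion order.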

3. `wakaq-info` prints all queues with counts of pending tasks and pending eta tasks. Throughput isn't printed; it has to be calculated by sampling `wakaq-info` at least twice. You can also import `info` from `wakaq.utils` to use it from a Python script.
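For example, a queue's throughput can be estimated from two samples like this (`sample_pending` is any callable returning that queue's current pending count, e.g. parsed from `wakaq-info` output; this helper is illustrative, not part of WakaQ):

```python
import time

def throughput(sample_pending, interval_secs=60):
    """Estimate tasks/sec drained from a queue over an interval.

    Note this measures net drain: tasks enqueued during the interval
    make it an underestimate, and a negative result means tasks were
    added faster than they were processed.
    """
    first = sample_pending()
    time.sleep(interval_secs)
    second = sample_pending()
    return (first - second) / interval_secs
```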

4. No built-in way to pause. I pause by sending a broadcast task to all workers that sets exclude_queues to all queues and then restarts the worker. After that it only listens for broadcast tasks, and it can be turned back on with another broadcast task.
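A toy model of that trick (not the WakaQ API; just the state change the broadcast task performs):

```python
class Worker:
    """Excluding every queue leaves the worker listening only for
    broadcast tasks, which is what makes resume possible."""

    def __init__(self, queues):
        self.queues = list(queues)
        self.exclude_queues = []

    def listening_on(self):
        return [q for q in self.queues if q not in self.exclude_queues]

    def pause(self):
        # Broadcast task body: exclude everything, then restart the
        # worker so the new config takes effect.
        self.exclude_queues = list(self.queues)

    def resume(self):
        # A second broadcast task clears the exclusions.
        self.exclude_queues = []
```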

> Congrats

Thanks!
