zlacker

[return to "River: A fast, robust job queue for Go and Postgres"]
1. hipade+Xb[view] [source] 2023-11-20 16:56:41
>>bo0tzz+(OP)
What a strange design. If a job depends on an open transaction, then perhaps the job should run in the same code that initiated the transaction instead of in an outside job queue?

Also, you pass the data a job needs to run as part of the job payload. Then you don't have the "data doesn't exist" issue.

◧◩
2. terafl+2g[view] [source] 2023-11-20 17:10:16
>>hipade+Xb
It's not strange at all to me. The job is "transactional" in the sense that it depends on the transaction, and should be triggered iff the transaction commits. That doesn't mean it should run inside the transaction (especially since long-running transactions are terrible for performance).

Passing the job's data around separately means you're now storing two copies, which creates a point where things can get out of sync.

◧◩◪
3. hipade+nj[view] [source] 2023-11-20 17:20:47
>>terafl+2g
> should be triggered iff the transaction commits

Agreed. Which is why the design doesn't make any sense. Because in the scenario presented they're starting a job during a transaction.

◧◩◪◨
4. eximiu+DE[view] [source] 2023-11-20 18:33:48
>>hipade+nj
That part is somewhat poorly explained. It's a motivating example of why having your job queue be separate from your system of record can be bad.

e.g.,

1. Application starts transaction
2. Application updates DB state (business details)
3. Application enqueues job in Redis
4. Redis job workers pick up job
5. Redis job workers error out
6. Application commits transaction

This motivates placing the job state in the same transaction, whereas non-DB-based job queues run into races exactly like this.
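To make the ordering concrete, here's a minimal sketch (not River's actual API; all names are hypothetical) using a toy in-memory "database" with a jobs table. Because the job row is written in the same transaction as the business data, a polling worker can only ever see the job after commit — step 4 in the list above can't happen before step 6:

```go
package main

import "fmt"

// DB is a toy stand-in for Postgres: committed orders and jobs only.
type DB struct {
	orders []string
	jobs   []string
}

// Tx buffers writes until Commit, mimicking transaction visibility.
type Tx struct {
	db            *DB
	pendingOrders []string
	pendingJobs   []string
}

func (db *DB) Begin() *Tx { return &Tx{db: db} }

func (tx *Tx) InsertOrder(o string) {
	tx.pendingOrders = append(tx.pendingOrders, o)
}

// EnqueueJob writes the job row inside the same transaction as the
// business data, instead of pushing it to an external queue.
func (tx *Tx) EnqueueJob(j string) {
	tx.pendingJobs = append(tx.pendingJobs, j)
}

func (tx *Tx) Commit() {
	tx.db.orders = append(tx.db.orders, tx.pendingOrders...)
	tx.db.jobs = append(tx.db.jobs, tx.pendingJobs...)
}

// PollJobs is what a worker would run: it sees only committed jobs.
func (db *DB) PollJobs() []string { return db.jobs }

func main() {
	db := &DB{}

	tx := db.Begin()
	tx.InsertOrder("order-1")
	tx.EnqueueJob("send-receipt(order-1)")

	// Before commit, a worker polling the jobs table sees nothing, so it
	// can never pick up a job whose data isn't committed yet.
	fmt.Println("jobs visible before commit:", len(db.PollJobs()))

	tx.Commit()
	fmt.Println("jobs visible after commit:", len(db.PollJobs()))
}
```

If the transaction rolls back instead of committing, the job row vanishes with it, which is the other half of the "triggered iff the transaction commits" property discussed above.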
