zlacker

[return to "River: A fast, robust job queue for Go and Postgres"]
1. hipade+Xb[view] [source] 2023-11-20 16:56:41
>>bo0tzz+(OP)
What a strange design. If a job depends on an extant transaction, then perhaps the job should run in the same code that initiated the transaction instead of in an outside job queue?

Also, you pass the data a job needs to run as part of the job payload. Then you don't have the "data doesn't exist" issue.

2. brandu+Id[view] [source] 2023-11-20 17:02:48
>>hipade+Xb
Author here.

Wanting to offload heavy work to a background job is about as old a best practice as exists in modern software engineering.

This is especially important for the kind of API and/or web development that a large number of people on this site are involved in. By offloading expensive work, you take that work out-of-band of the request that generated it, making that request faster and providing a far superior user experience.

Example: User sign-up where you want to send a verification email. Talking to a foreign API like Mailgun might be a 100 ms to multi-second (worst case) operation — why make the user wait on that? Instead, send it to the background, and give them a tight < 100 ms sign-up experience that's so fast that for all intents and purposes, it feels instant.
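A minimal in-process sketch of that offload pattern (plain channels and goroutines, not River's actual API — the `EmailJob` type and its fields are invented here for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// EmailJob carries everything the worker needs in its payload,
// so the handler doesn't depend on request-scoped state.
type EmailJob struct {
	To, Subject string
}

func main() {
	jobs := make(chan EmailJob, 64) // buffered: enqueueing is near-instant
	var wg sync.WaitGroup

	// One background worker; a real queue would run a pool and persist jobs.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for j := range jobs {
			// Stand-in for the slow call to an email API like Mailgun.
			fmt.Printf("sent %q to %s\n", j.Subject, j.To)
		}
	}()

	// The request handler only enqueues, then returns to the user immediately.
	jobs <- EmailJob{To: "user@example.com", Subject: "Verify your address"}
	close(jobs)
	wg.Wait()
}
```

A durable queue like River replaces the channel with a Postgres table, so jobs survive a process restart — but the request-path economics are the same: enqueue fast, do the slow work out-of-band.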

3. hipade+3i[view] [source] 2023-11-20 17:17:18
>>brandu+Id
> Wanting to offload heavy work to a background job is about as old a best practice as exists in modern software engineering.

Yes. I am intimately familiar with background jobs. In fact I've been using them long enough to know, without hesitation, that you don't use a relational database as your job queue.

4. qaq+nl[view] [source] 2023-11-20 17:26:30
>>hipade+3i
Postgres-based job queues work fine if you have, say, 10K transactions per second and jobs on average do not take significant time to complete (things will run fine on a fairly modest instance). They also give guarantees that traditional job queues do not.
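The contended step in a Postgres queue is claiming a job, which is typically done with `FOR UPDATE SKIP LOCKED` so concurrent workers grab different rows without blocking each other. A runnable sketch of that claim semantic — the SQL in the comment is the usual pattern, and the mutex-guarded slice below stands in for the table so the idea executes locally:

```go
package main

import (
	"fmt"
	"sync"
)

// In Postgres the claim step usually looks like:
//
//   UPDATE jobs SET state = 'running'
//   WHERE id = (
//     SELECT id FROM jobs WHERE state = 'available'
//     ORDER BY id LIMIT 1
//     FOR UPDATE SKIP LOCKED
//   )
//   RETURNING id;
//
// SKIP LOCKED means a worker skips rows another worker has locked,
// so each job is claimed exactly once and workers never queue on a lock.

// Queue simulates that exactly-once claim with a mutex-guarded slice.
type Queue struct {
	mu   sync.Mutex
	jobs []int
}

// Claim atomically removes and returns the next available job, if any.
func (q *Queue) Claim() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.jobs) == 0 {
		return 0, false
	}
	id := q.jobs[0]
	q.jobs = q.jobs[1:]
	return id, true
}

func main() {
	q := &Queue{jobs: []int{1, 2, 3, 4}}
	claimed := make(chan int, 4)
	var wg sync.WaitGroup
	for w := 0; w < 2; w++ { // two competing workers
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				id, ok := q.Claim()
				if !ok {
					return
				}
				claimed <- id
			}
		}()
	}
	wg.Wait()
	close(claimed)
	fmt.Println("claimed", len(claimed), "jobs") // every job claimed exactly once
}
```

The "guarantees that traditional job queues do not" give you come from transactions: you can insert the job row in the same transaction as the business data it references, so the job becomes visible to workers only if that data committed.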
5. Rapzid+5S2[view] [source] 2023-11-21 08:04:01
>>qaq+nl
Probably an order of magnitude more, or perhaps a multiple of that, depending on the hardware and design.

In theory an append-only and/or HOT strategy leaning on Postgres just ripping through moderate-sized in-mem lists could be incredibly fast. The design would be more complicated and perhaps use-case dependent, but I bet it could be done.
