Transactional job queues have been a recurring theme throughout my career as a backend and distributed systems engineer at Heroku, Opendoor, and Mux. Despite the problems with non-transactional queues being well understood, I keep encountering them over and over. I wrote a bit about them in our docs: https://riverqueue.com/docs/transactional-enqueueing
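To make that concrete, here's roughly what the pattern looks like if you do it by hand with pgx: the job row is inserted in the same transaction as the business write, so it only becomes visible to workers if that write commits. Table and column names below are made up for illustration (not River's actual schema); River provides its own client API on top of this idea.

package main

import (
	"context"
	"encoding/json"

	"github.com/jackc/pgx/v5/pgxpool"
)

// createOrder writes an order and enqueues its follow-up job in the same
// transaction, so the job is only visible to workers if the order commits,
// and is never left dangling if it rolls back.
func createOrder(ctx context.Context, pool *pgxpool.Pool, orderID int64, email string) error {
	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // no-op once Commit succeeds

	// Business write.
	if _, err := tx.Exec(ctx,
		`INSERT INTO orders (id, email) VALUES ($1, $2)`, orderID, email); err != nil {
		return err
	}

	// Enqueue the job in the same transaction.
	args, err := json.Marshal(map[string]any{"order_id": orderID, "email": email})
	if err != nil {
		return err
	}
	if _, err := tx.Exec(ctx,
		`INSERT INTO jobs (kind, args, state) VALUES ($1, $2, 'available')`,
		"order_confirmation_email", args); err != nil {
		return err
	}

	return tx.Commit(ctx)
}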
Ultimately I want to help engineers focus their time on building a reliable product, not on chasing down distributed systems edge cases. I think most people underestimate just how far you can get with this model: most systems will never outgrow its scaling constraints, and the rest are generally better off not worrying about those problems until they truly need to.
Please check out the website and docs for more info. We have a lot more coming but first we want to iron out the API design with the community and get some feedback on what features people are most excited for. https://riverqueue.com/
> We're hard at work on more advanced features including a self-hosted web interface. Sign up to get updates on our progress.
Thank you for this work, I look forward to taking it for a (real) test drive!
It's particularly suited to use cases such as background jobs, workflows, or other operations that occur within your application, and it scales well enough for what 99.9999% of us will be doing.
Along the lines of:
_, err := river.Execute(context.Background(), j) // Enqueue the job, and wait for completion
if err != nil {
    log.Fatalf("Unable to execute job: %s", err)
}
log.Printf("Job completed")
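To make the intent a bit more concrete, this is roughly what I'd imagine such a call doing under the hood: insert the job, then block until its row reaches a terminal state. Everything below is made up for illustration (table, columns, state names), and a real implementation would presumably use LISTEN/NOTIFY rather than polling.

package main

import (
	"context"
	"errors"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// executeAndWait sketches the proposed "enqueue and block until done" helper:
// it inserts a job row, then polls its state until a worker marks it
// completed or discarded.
func executeAndWait(ctx context.Context, pool *pgxpool.Pool, kind string, args []byte) error {
	var jobID int64
	err := pool.QueryRow(ctx,
		`INSERT INTO jobs (kind, args, state) VALUES ($1, $2, 'available') RETURNING id`,
		kind, args).Scan(&jobID)
	if err != nil {
		return err
	}

	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			var state string
			if err := pool.QueryRow(ctx,
				`SELECT state FROM jobs WHERE id = $1`, jobID).Scan(&state); err != nil {
				return err
			}
			switch state {
			case "completed":
				return nil
			case "discarded":
				return errors.New("job failed permanently")
			}
		}
	}
}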
Does that make sense? I plan to use Orleans to handle a lot of the heavy HA/scale lifting. It can likely stand in for Redis in a lot of cache use cases (in some non-obvious ways), and I'm anticipating writing a Postgres stream provider for it when the time comes. I'll likely end up writing a Postgres job queue as well, so I'll definitely check out River for inspiration.
A lot of Postgres drivers, including Npgsql, the de facto .NET driver, support logical decoding these days, which unlocks a ton of exciting use cases and patterns via log processing.
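For anyone who hasn't played with it, here's a rough sketch of the simplest polling flavor at the SQL level, driven from Go since that's the language elsewhere in the thread. The slot name and connection string are placeholders, the server needs wal_level=logical and a role allowed to create replication slots, and real pipelines usually consume the streaming replication protocol (which drivers like Npgsql expose) rather than polling these functions.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()

	// Placeholder connection string; requires wal_level = logical on the server.
	conn, err := pgx.Connect(ctx, "postgres://localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Create a slot with the built-in test_decoding plugin. This errors if the
	// slot already exists; ignored here for brevity.
	_, _ = conn.Exec(ctx,
		`SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')`)

	// Fetch (and consume) the decoded changes committed since the last call.
	rows, err := conn.Query(ctx,
		`SELECT lsn::text, xid::text, data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var lsn, xid, data string
		if err := rows.Scan(&lsn, &xid, &data); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s %s\n", lsn, xid, data)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}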
It's probably more correct to say that the engineering effort required to make Postgres-as-a-queue scale horizontally is a lot more than the engineering effort required to make a dedicated queueing service scale horizontally. The trade-off is that you'll typically have to start scaling horizontally much sooner with a dedicated queueing service than with a Postgres database.
The argument for Postgres-as-a-queue is that you might be able to get to market much quicker, which can be significantly more important than how well you can scale down the track.