zlacker

1. koolba+(OP)[view] [source] 2017-09-20 16:23:47
> I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient.

+1 to in-database queues that are implemented correctly. The transactional consistency of enqueuing alone is worth it. I've used similar patterns as a staging area for many years.

This also allows for transactionally consistent error handling. If a job is failing repeatedly, you can transactionally remove it from the main queue and add it to a dead-letter queue.
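A rough sketch of what that atomic move could look like. This uses Python's stdlib `sqlite3` as a stand-in for Postgres, and the table and column names (`jobs`, `dead_jobs`, `attempts`) are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs      (id INTEGER PRIMARY KEY, args TEXT, attempts INTEGER);
    CREATE TABLE dead_jobs (id INTEGER PRIMARY KEY, args TEXT, attempts INTEGER);
    INSERT INTO jobs VALUES (1, '{"user_id": 42}', 7), (2, '{"user_id": 43}', 0);
""")

MAX_ATTEMPTS = 5  # hypothetical retry limit

# Move repeatedly-failing jobs to the dead-letter table in one transaction:
# either both the INSERT and the DELETE happen, or neither does.
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute(
        "INSERT INTO dead_jobs SELECT * FROM jobs WHERE attempts >= ?",
        (MAX_ATTEMPTS,),
    )
    conn.execute("DELETE FROM jobs WHERE attempts >= ?", (MAX_ATTEMPTS,))

print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])       # 1
print(conn.execute("SELECT COUNT(*) FROM dead_jobs").fetchone()[0])  # 1
```

A crash between the two statements leaves the queue untouched, which is exactly the guarantee you don't get when the queue lives in a separate broker from the data.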

replies(1): >>brandu+d2
2. brandu+d2[view] [source] 2017-09-20 16:36:32
>>koolba+(OP)
> This also allows for transactionally consistent error handling. If a job is failing repeatedly, you can transactionally remove it from the main queue and add it to a dead-letter queue.

Totally. This also leads to other operational tricks that you hope you never need, but are great the day you do. For example, a bad deploy queues a bunch of jobs with invalid arguments that will never succeed. You can open a transaction and fix them in bulk with an `UPDATE` using jsonb selection and manipulation operators. You can even issue a `SELECT` afterward to make sure things look right before running `COMMIT`.
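A sketch of that bulk fix-up, again with stdlib `sqlite3` standing in for Postgres. SQLite's `json_set`/`json_extract` play the role of Postgres's jsonb operators (in Postgres you'd reach for `jsonb_set`, `->>`, etc.), and the `endpoint` argument being fixed is a made-up example:

```python
import json
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can
# issue BEGIN/COMMIT/ROLLBACK ourselves, like a psql session.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, args TEXT)")
# Pretend a bad deploy enqueued jobs pointing at a bogus endpoint.
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [
    (1, json.dumps({"endpoint": "/v1/brokne", "user_id": 1})),
    (2, json.dumps({"endpoint": "/v1/brokne", "user_id": 2})),
])

cur = conn.cursor()
cur.execute("BEGIN")
# Bulk-fix the bad argument inside the transaction.
cur.execute("""
    UPDATE jobs
    SET args = json_set(args, '$.endpoint', '/v1/charge')
    WHERE json_extract(args, '$.endpoint') = '/v1/brokne'
""")
# Inspect the result before committing; ROLLBACK if it looks wrong.
rows = cur.execute("SELECT args FROM jobs").fetchall()
if all(json.loads(a)["endpoint"] == "/v1/charge" for (a,) in rows):
    conn.commit()
else:
    conn.rollback()
```

The point is the shape of the workflow, not the driver: mutate in bulk, eyeball the rows, and only then `COMMIT`.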

Again, something that you hope no one ever does in production, but a life saver in an emergency.
