zlacker

1. kpmah+(OP) 2021-12-18 11:30:27
To spell out good reasons for doing this:

If you're already using Postgres, you avoid the operational complexity of introducing another database. Less operational complexity means better availability.

You can modify jobs and the rest of your database atomically. For example, you can create a row and enqueue a job to process it in a single transaction.
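
A minimal sketch of that pattern in Python with psycopg2, assuming hypothetical "documents" and "jobs" tables (the names are made up, not from the article):

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    with conn:  # one transaction: commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO documents (body) VALUES (%s) RETURNING id",
                ("hello",),
            )
            doc_id = cur.fetchone()[0]
            # Same transaction, so either both rows exist or neither does.
            cur.execute(
                "INSERT INTO jobs (kind, payload) VALUES (%s, %s)",
                ("process_document", str(doc_id)),
            )
    conn.close()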

replies(2): >>svdr+u4 >>AtlasB+2A
2. svdr+u4 2021-12-18 12:26:24
>>kpmah+(OP)
Avoiding additional operational complexity is really important, but for pub/sub we are using Redis. While this does add complexity, it adds very little, because Redis is incredibly easy to install and maintain.
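
To give a sense of how little code it is, here is a minimal redis-py sketch, assuming a local Redis and a made-up "events" channel (bear in mind Redis pub/sub is fire-and-forget: messages published while no subscriber is connected are dropped):

    import redis

    r = redis.Redis()
    sub = r.pubsub()
    sub.subscribe("events")
    sub.get_message(timeout=1)  # consume the subscribe confirmation

    r.publish("events", "row 42 updated")

    msg = sub.get_message(timeout=1)  # the published message
    if msg and msg["type"] == "message":
        print(msg["data"])  # b'row 42 updated'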
replies(1): >>kpmah+Ja
3. kpmah+Ja 2021-12-18 13:34:11
>>svdr+u4
Obviously you're in a better position to evaluate the trade-offs for your application than I am, so I'm not saying your decision is wrong, but this can decrease availability: your application now depends on both PostgreSQL AND Redis being available to function.
4. AtlasB+2A 2021-12-18 16:46:38
>>kpmah+(OP)
But one of the golden rules of databases is not to use them as queues or integration layers.

Granted, I didn't even read the main article at first, because the headline seemed so casual.

Edit post-read: yeah, using it as a CI jobs database. He lists the alternatives, but seriously, Kafka? Kafka is for linearly scaling pub/sub; this guy has a couple of CI jobs that run infrequently.

Sure, this works if the entire thing is a throwaway, non-critical pub/sub system.

"It's possible to scale Postgres to storing a billion 1KB rows entirely in memory - This means you could quickly run queries against the full name of everyone on the planet on commodity hardware and with little fine-tuning."

Yeah, just because it can does not mean it is suited for this purpose.

Don't do this for any integration at even medium scale.
