zlacker

[return to "Do you really need Redis? How to get away with just PostgreSQL"]
1. _ugfj+z2[view] [source] 2021-06-12 07:29:54
>>hyzyla+(OP)
You really don't need anything fancy to implement a queue using SQL. You need a table with a primary id and a "status" field. An "expire" field can be used instead of the "status"; we used the expire approach because it allows easy retries.

1. SELECT item_id FROM the queue table WHERE expire = 0. If this returns nothing, no items are available.

2. UPDATE the queue table SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0. Then check whether the UPDATE affected any rows. If it did, the item is yours. If not, loop. If the database has a sane optimizer, it will note that at most one row needs locking, since the primary id is given.
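
A rough sketch of those two steps in SQL (the table name, column types, and literal values are illustrative, not the actual Drupal schema):

    -- step 1: find a candidate item nobody currently holds
    SELECT item_id FROM queue WHERE expire = 0 LIMIT 1;

    -- step 2: try to claim it; if this reports "0 rows affected",
    -- another worker got there first, so loop back to step 1
    UPDATE queue
       SET expire = 1623500000  -- some_future_time, e.g. now + lease duration
     WHERE item_id = 42         -- the $selected_item_id from step 1
       AND expire = 0;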

All this needs is a very weak property: document level atomic UPDATE which can return whether it changed anything. (How weak? MongoDB could do that in 2009.)

Source code at https://git.drupalcode.org/project/drupal/-/blob/9.2.x/core/... (We cooked this up for Drupal in 2009 but I am reasonably sure we didn't invent anything new.)

Of course, this is not the fastest job queue there is but it is quite often good enough.

2. hughrr+gc[view] [source] 2021-06-12 09:18:43
>>_ugfj+z2
Oh please stop using databases as queues. I spent a disproportionate amount of time in the last 20 years undoing that decision. It doesn’t scale at all well.
3. blowsk+gr[view] [source] 2021-06-12 12:16:41
>>hughrr+gc
You’re dealing with survivorship bias - those companies whose products were successful enough that they needed your services. You don’t speak to the teams who went bankrupt because they spent too much time fiddling with RabbitMQ, when Postgres would have been fine.

As always, there are trade-offs and no silver bullets.

4. skytre+gw[view] [source] 2021-06-12 13:10:49
>>blowsk+gr
> You’re dealing with survivorship bias - those companies whose products were successful enough that they needed your services

In fairness, Redis has only been available for 12 years, so someone who has been in the industry longer has probably encountered systems using DBs as queues for no better reason than that an alternative wasn't available when the system was built. (RabbitMQ is just a couple of years older, I think.)

But in this day and age you have better options to start with, c'mon.

> they spent too much time fiddling with RabbitMQ, when Postgres would have been fine.

Rather ironic claim in a thread full of tips, tricks, and gotchas just to make PG behave like a queue no?

I can't speak for RabbitMQ personally, but Redis has got to be among the simplest drop-in dependencies I've ever encountered. For a simple queue, the defaults work out of the box.

Gut feel also says Redis defaults for queuing would handle more load than PG defaults for queuing, so you could leave it untouched for far longer to focus on your product, get to profitability, etc. Feel free to debunk with data.

5. derefr+9R[view] [source] 2021-06-12 16:31:49
>>skytre+gw
> Rather ironic claim in a thread full of tips, tricks, and gotchas just to make PG behave like a queue no?

There are libraries in many languages written specifically to accomplish the "make PG behave like a queue" part. All these people are explaining the principle of how it would work, but in practice, you can just pull in one of those libraries and be done. (Or, more accurately, you use an abstract external-job-queuing library and tell it to use its Postgres adapter, which in turn uses that library.)

In exchange, you get the ability to not have to set up / maintain any more infra than you already have. Which can be important if you're already "at scale", where every component you set up needs to be duplicated per geographic-region, clustered for High Availability, etc. If you've already done all that for Postgres, it'd be nice to not also have to do all that again, differently for Redis, RabbitMQ/Kafka/NATS, etc.

> Gut feel also says Redis defaults for queuing would handle more load than PG defaults for queuing

Don't know what you've got your queues doing, but our service only does (persistent, out-of-process) queuing for two purposes:

1. Global scheduled background-job queuing. (Think "cron jobs.") There are only a static number of these; they don't scale up with load.

2. Lifecycle email dispatch. This scales up with MAU, but with very very low coefficients.

For those cases, Postgres would totally be enough. (Anything would be enough!)

Most "queues" needed in the real world are requirements-light like this. It's a rare company whose product is doing something Apache BEAM-alike with its data, where a high-throughput reliable MQ is a fundamental part of the data workflow. (And those companies, I hope, know that they need an actual MQ, not a piece of software that does queuing in its spare time.)

Our company doesn't use Postgres queuing; we do, in fact, use Redis for it instead. But we only ended up doing that because we already needed Redis for other stuff; and if you already have Redis in play (including an ops plan for scaling it), then it's the better tool for the job.

6. skytre+v11[view] [source] 2021-06-12 17:52:24
>>derefr+9R
> In exchange, you get the ability to not have to set up / maintain any more infra than you already have.

And, you mean, a PGQ will not need any special monitoring other than the common DB metrics?

For instance, if I ever run a queue, it's just basic ops diligence to know the length of the queue, the average time an item spends in the queue, throughput over time, and so on. Are there standard monitoring modules that would check this for a PGQ? Because compatibility and documentation for common stuff like this is also part of what you get in exchange for setting up a proper queue cluster.

The first one is a particular sticking point for me. If you don't do it right, you will end up periodically issuing a COUNT over the table. You might say that's acceptable for most companies because they don't need high-performance queues, but I specifically need my monitoring to be reliable during times of unexpected high load. Also, ideally, there should be close to zero chance that my telemetry is the thing that ends up causing me trouble.
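
To be concrete, I mean ad-hoc queries of roughly this shape against a hypothetical queue table (names made up):

    -- queue depth: a periodic scan over the whole (often bloated) table
    SELECT count(*) FROM queue WHERE expire = 0;

    -- age of the oldest waiting item
    SELECT now() - min(created_at) FROM queue WHERE expire = 0;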

Also, why does no one consider the added "surface for human error" that using a DB for a queue introduces? For instance, anyone can drop into the DB and run a query they were not supposed to. A malicious actor can update all outgoing emails in the queue to point at another recipient. If those are sensitive emails like password resets or OTPs, good luck. A dedicated queue process does not allow such operations.

I have to say, it's very perplexing to me that people (especially on HN) would avoid using the right structure just because they have to invest a bit more work.

7. derefr+W21[view] [source] 2021-06-12 18:02:02
>>skytre+v11
> I have to say, it's very perplexing to me that people (especially on HN) would avoid using the right structure just because they have to invest a bit more work.

You're imagining this as if it's just the CapEx (the work of setting up Redis), but in reality it's the OpEx (the work of maintaining a Redis cluster) that kills you. Think of it in terms of the number of ops salaries you have to pay. At scale, this number goes up in proportion to the complexity of your stack.

8. skytre+p51[view] [source] 2021-06-12 18:21:34
>>derefr+W21
Actually, if you read the rest of my previous reply (as opposed to an offhand comment at the end) you will see that I'm considering the logistics of operating an in-DB queue as well.

Using an in-DB queue doesn't give you zero OpEx, does it? Maybe you can make the argument that it's cheaper, but it's cheaper for a reason: the job wasn't done properly, so to speak.

Both options introduce new failure modes and op costs into your system. Might as well do it properly if (and this is a huge IF in my opinion) it only costs slightly more. When you run into a failure, it's a standard one; maybe the solution is even one Google search away, as opposed to realizing, an hour into a late-night debugging session, that, shit, that table should have been treated as a queue!

9. derefr+Jb1[view] [source] 2021-06-12 19:19:10
>>skytre+p51
I read your whole reply. I didn't see any mention of anything I'd consider an OpEx cost.

Re: Monitoring

Presuming you're already monitoring anything at all, adding monitoring for a new piece of infrastructure is a pure CapEx cost. You presumably already have monitoring infrastructure running. Its ingestion, by design, will be 99% idle — i.e. it won't need to be scaled horizontally proportionally to the number of components. The only thing needed, then, will be careful up-front design for the monitoring.

Which is usually also a cost already paid for you in advance when you use an abstract job-queue library. They all know how to expose Prometheus metrics endpoints, and they do the accounting to serve those endpoints efficiently (usually using process-local per-job-producer and per-job-consumer counters, which you must then roll up yourself at the PromQL level, taking the irate() to find spikes).

Re: AAA

> Also, why does no one consider the added "surface for human error" that using a DB for a queue introduces?

Because the type of person who reaches for their DB first to solve a novel problem is the same type of person who understands and makes use of their DB's security features, to the point that doing more DB security config isn't a complex one-off problem for them, but something they can whip up in a few minutes.

For a simple option, you can create two service-account DB roles: a queue_producer and a queue_consumer. You can put all the job-queue implementation tables in a schema owned by the queue_producer, and then grant the queue_consumer SELECT privileges on all the tables and UPDATE privileges on some of them. Then nobody but the queue_producer (or a DB superuser) can create or destroy jobs; and nobody but a queue_producer, a queue_consumer, or a superuser can read or modify jobs. (Your job-queue abstraction library within your service usually maintains its own DB connection pool anyway, so it's no sweat to have those connections use their own credentials specific to their job-queue role.)
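
In Postgres DDL that's roughly the following (the schema, table, and column names are made up for illustration):

    CREATE ROLE queue_producer LOGIN;
    CREATE ROLE queue_consumer LOGIN;

    -- all job-queue tables live in a schema owned by the producer role
    CREATE SCHEMA jobqueue AUTHORIZATION queue_producer;

    -- consumers can read every job, but only update the claim/status
    -- columns on the job table itself (column-level UPDATE grant)
    GRANT USAGE ON SCHEMA jobqueue TO queue_consumer;
    GRANT SELECT ON ALL TABLES IN SCHEMA jobqueue TO queue_consumer;
    GRANT UPDATE (status, expire) ON jobqueue.job TO queue_consumer;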

For a more complex — but perhaps useful? — option, the tables themselves can be "hidden" behind stored procedures (DDLed into existence by the abstract job-queue library), where nobody has any privileges (incl. SELECT) on the tables, only EXECUTE rights on the sprocs. And the sprocs are carefully designed to never do anything that could have an unbounded CPU time. Then anyone can "check up on" or even "insert into" the job-queue, but nobody can do anything "funny" to it. (My god, it's an encapsulated API!)
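
A toy version of that sproc-only flavor, using a SECURITY DEFINER function so callers need no table privileges at all (again, the names are invented):

    -- runs with the privileges of its owner (the table owner),
    -- so callers only ever need EXECUTE on the function
    CREATE FUNCTION jobqueue.enqueue(p_payload jsonb) RETURNS bigint
    LANGUAGE sql SECURITY DEFINER AS $$
      INSERT INTO jobqueue.job (payload, expire) VALUES (p_payload, 0)
      RETURNING item_id;
    $$;

    -- no direct grants on the table; only the function is exposed
    GRANT EXECUTE ON FUNCTION jobqueue.enqueue(jsonb) TO PUBLIC;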

Once again — the libraries that abstract this away, already think about concerns like this, and choose one or the other of these options. That's why libraries like this exist, when the "core premise" is so simple: it's so there's a place to put all the fine details derived from man-years of thought on how to make this approach robust.

-----

On a completely different track, though: having a queue in the DB can sometimes be the optimal (i.e. the "if we had infinite man-hours for design + implementation") engineering decision. This case comes up when the thing the queue operates on is data in the DB. In that case, the DB data modification and the job's completion can succeed or fail together atomically, as part of a single DB transaction.

To accomplish the same thing when your queue lives outside the DB, you usually end up either needing some really gnarly distributed-locking logic that both your DB and your app layer need to know everything about (leaky abstraction!); or you need to completely remodel your data and your job queue into an event-streaming paradigm, so that you can "rewind" one side when the other side fails.
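
To make it concrete, a toy illustration of that single-transaction property (names made up):

    BEGIN;

    -- the actual work the job performs, against application data
    UPDATE accounts SET balance = balance - 10 WHERE account_id = 42;

    -- and the job bookkeeping, in the same transaction:
    -- either both changes become visible, or neither does
    UPDATE jobqueue.job SET status = 'done' WHERE item_id = 123;

    COMMIT;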
