zlacker

[parent] [thread] 10 comments
1. paxys+(OP)[view] [source] 2021-06-12 13:43:19
You are wayyy overestimating how complex something like RabbitMQ or Redis is. You don’t need to hire entire teams of engineers or go bankrupt setting up an instance. It is less work than implementing a production-level queue in Postgres for sure.
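For reference, even the minimal version of a "queue in Postgres" means hand-rolling something like this (a rough sketch using node-postgres; the `jobs` table and its columns are made up), before you've dealt with retries, dead-lettering, or monitoring:

    // Sketch of a Postgres-backed queue worker using FOR UPDATE SKIP LOCKED.
    // Assumes a hypothetical table: jobs(id, payload, status, run_at).
    import { Pool } from "pg";

    const pool = new Pool(); // connection settings come from PG* env vars

    async function handle(payload: unknown): Promise<void> {
      // actual job logic goes here
    }

    async function claimAndRunOneJob(): Promise<boolean> {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        // Grab one pending job without blocking other workers.
        const { rows } = await client.query(
          `SELECT id, payload
             FROM jobs
            WHERE status = 'pending' AND run_at <= now()
            ORDER BY run_at
            LIMIT 1
            FOR UPDATE SKIP LOCKED`
        );
        if (rows.length === 0) {
          await client.query("ROLLBACK");
          return false; // nothing to do
        }
        // Note: the transaction (and row lock) stays open while the job runs.
        await handle(rows[0].payload);
        await client.query("UPDATE jobs SET status = 'done' WHERE id = $1", [rows[0].id]);
        await client.query("COMMIT");
        return true;
      } catch (err) {
        await client.query("ROLLBACK"); // job stays 'pending' and will be retried
        throw err;
      } finally {
        client.release();
      }
    }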
replies(4): >>zomgwa+T1 >>aether+52 >>fauige+n2 >>mbrees+z2
2. zomgwa+T1[view] [source] 2021-06-12 13:58:36
>>paxys+(OP)
I agree. I’ll also add that Redis is a level easier to operate than RabbitMQ.
3. aether+52[view] [source] 2021-06-12 13:59:35
>>paxys+(OP)
Have worked at multiple companies that successfully used redis or rabbit with teams of fewer than five engineers.

It's a little insane that a highly rated thread on HN is telling people to use postgres as their queuing solution. The world is wide, and I'm sure that somewhere out there there is a situation where using postgres as a queue makes sense, but in 99% of all cases, this is a terrible idea.

Also, SQS and nsq are simple.

replies(2): >>politi+2p >>Gordon+Lr
4. fauige+n2[view] [source] 2021-06-12 14:02:40
>>paxys+(OP)
The issue is not necessarily the complexity of RabbitMQ or Redis. The complexity comes from having to manage another (stateful) process that has to be available at all times.
replies(1): >>hughrr+D3
5. mbrees+z2[view] [source] 2021-06-12 14:04:37
>>paxys+(OP)
“Production-level” means different things to different groups. Many (most?) groups don’t operate at the scale where they need a specialized queue in a dedicated message broker. Simple queues in a DB will work fine. Even if it isn’t very complex, why not use the infrastructure you probably already have set up: an RDBMS?

Now, if you’re using a nosql approach for data storage, then you already know your answer.

replies(1): >>gravyp+74
6. hughrr+D3[view] [source] [discussion] 2021-06-12 14:15:47
>>fauige+n2
Yes. Pay Amazon to do it and use all that saved time to add business value instead.

They'll also manage the consumers of the queue and scale them too! Serverless is bliss.
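On the consumer side that's basically just a handler function (a sketch of an SQS-triggered Lambda; the queue-to-function wiring lives in AWS configuration, not code):

    // Sketch of an SQS-triggered Lambda: AWS polls the queue, invokes this
    // handler with batches of messages, and scales concurrency on its own.
    import type { SQSEvent } from "aws-lambda";

    export const handler = async (event: SQSEvent): Promise<void> => {
      for (const record of event.Records) {
        const job = JSON.parse(record.body); // whatever the producer enqueued
        await processJob(job);               // throwing makes SQS redeliver the batch
      }
    };

    async function processJob(job: unknown): Promise<void> {
      // business logic goes here
      console.log("processing", job);
    }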

7. gravyp+74[view] [source] [discussion] 2021-06-12 14:21:45
>>mbrees+z2
My main concern would be monitoring. Most queue systems connect to alerting systems and can page you if you suddenly stop processing things or keep retrying the same message many, many times. With a DB, since the scope of access is much larger, you don't get those sorts of guarantees about access patterns, and you essentially need to reinvent the wheel for monitoring.
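Concretely, the wheel you end up reinventing is something like this (a sketch; the `jobs` table, thresholds, and paging hook are all made up):

    // Hand-rolled health check for a DB-backed queue: backlog size and age
    // of the oldest pending job. All names and thresholds are placeholders.
    import { Pool } from "pg";

    const pool = new Pool();
    const MAX_BACKLOG = 1000;     // arbitrary
    const MAX_AGE_MINUTES = 10;   // arbitrary

    async function page(message: string): Promise<void> {
      // wire this to PagerDuty / Opsgenie / whatever you actually use
      console.error(message);
    }

    async function checkQueueHealth(): Promise<void> {
      const { rows } = await pool.query(
        `SELECT count(*)::int AS backlog,
                coalesce(extract(epoch FROM now() - min(run_at)) / 60, 0)::float AS oldest_minutes
           FROM jobs
          WHERE status = 'pending'`
      );
      const { backlog, oldest_minutes } = rows[0];
      if (backlog > MAX_BACKLOG || oldest_minutes > MAX_AGE_MINUTES) {
        await page(`queue unhealthy: backlog=${backlog}, oldest=${oldest_minutes}min`);
      }
    }

    // Run it on a timer; with a broker you'd get this from its alerting hooks.
    setInterval(() => checkQueueHealth().catch(console.error), 60_000);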

All to save 2 to 3 hours of Googling for the best queue for your use case and finding a library for your language.

It makes sense if you don't care about reliability and just need something easy for many, many people to deploy (e.g. Drupal).

replies(1): >>bcrosb+pa
8. bcrosb+pa[view] [source] [discussion] 2021-06-12 15:20:36
>>gravyp+74
We use Pingdom, pointed at a page that reports the health of various systems. Queue included.

> All to save 2 to 3 hours of Googling for the best queue for your use case and finding a library for your language.

The cost of using a new piece of tech in production is not just 2 or 3 hours.

replies(1): >>gravyp+vc
9. gravyp+vc[view] [source] [discussion] 2021-06-12 15:43:23
>>bcrosb+pa
If you're at the scale where your postgres db can be used as a queue and no one on your team has experience running these systems (most startups), then pretty much anything will work to begin with, and as long as you have a clear interface separating your code from your queue dependency, it'll be easy to swap out.

At $JOB-1 I wrote a `Queue` and `QueueWorker` abstraction that used an environment variable to switch between different queue backends while providing the same interface. Because of this I got everyone up and running with Redis lists as a queue backend, and could then add things like MQTT or RabbitMQ as things evolved.

I also defined very loose constraints for the queue interface so that backends were easy to implement. Essentially there was a `push()`, which added something to the queue or failed and returned an error, and an `onWork()`, which was called whenever there was work. Delivery was "at least once", meaning your system had to handle multiple instances being delivered the same work item; we would only ack the queue message after `onWork()` completed successfully.
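A minimal sketch of that kind of interface (simplified, not the actual code; only an in-memory backend is shown, but a Redis-list or RabbitMQ backend implements the same two methods and is picked the same way):

    type Handler = (payload: unknown) => Promise<void>;

    interface Queue {
      // Enqueue a payload, or fail with an error.
      push(topic: string, payload: unknown): Promise<void>;
      // Register a handler. Delivery is "at least once": the same payload can
      // arrive more than once, so handlers must tolerate duplicates. A message
      // is only acked once the handler resolves.
      onWork(topic: string, handler: Handler): void;
    }

    class InMemoryQueue implements Queue {
      private handlers = new Map<string, Handler>();

      async push(topic: string, payload: unknown): Promise<void> {
        const handler = this.handlers.get(topic);
        if (!handler) throw new Error(`no worker registered for topic ${topic}`);
        await handler(payload); // "ack" is implicit: we only return after success
      }

      onWork(topic: string, handler: Handler): void {
        this.handlers.set(topic, handler);
      }
    }

    // Backends are swapped with an environment variable, not code changes.
    function createQueue(): Queue {
      const backend = process.env.QUEUE_BACKEND ?? "memory";
      if (backend === "memory") return new InMemoryQueue();
      // "redis", "rabbitmq", "mqtt", ... get added here as they're implemented.
      throw new Error(`unsupported QUEUE_BACKEND: ${backend}`);
    }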

There's not really anything preventing a team from doing this, putting a pin in it, and coming back to it when there's a scalability or reliability concern.

10. politi+2p[view] [source] [discussion] 2021-06-12 17:28:39
>>aether+52
If your data storage pattern involves a database write followed by a message queue insert, then the ability to wrap those in a transaction can be a good trade-off to avoid consistency failures between the writes.

Avoid consuming that queue directly though -- which is probably what you're thinking of when you say this is a dumb idea, and I tend to agree. Typically, you want a worker that moves entries into a queue better suited to your application.
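That's essentially the transactional outbox pattern; a rough sketch (table names and the publish function are placeholders):

    // Business write and queue insert commit or roll back together; a separate
    // relay drains the outbox into the queue you actually consume from.
    import { Pool } from "pg";

    const pool = new Pool();

    async function createOrder(order: { id: string; total: number }): Promise<void> {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        await client.query(
          "INSERT INTO orders (id, total) VALUES ($1, $2)",
          [order.id, order.total]
        );
        await client.query(
          "INSERT INTO outbox (topic, payload) VALUES ($1, $2)",
          ["order.created", JSON.stringify(order)]
        );
        await client.query("COMMIT"); // both rows exist, or neither does
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      } finally {
        client.release();
      }
    }

    // The relay worker. Assumes a single instance; with several you'd claim
    // rows inside a transaction with FOR UPDATE SKIP LOCKED.
    async function drainOutbox(publish: (topic: string, payload: string) => Promise<void>): Promise<void> {
      const { rows } = await pool.query(
        "SELECT id, topic, payload FROM outbox ORDER BY id LIMIT 100"
      );
      for (const row of rows) {
        await publish(row.topic, row.payload); // at-least-once: publish, then delete
        await pool.query("DELETE FROM outbox WHERE id = $1", [row.id]);
      }
    }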

Bottom line though, there is not a single best "queue" product. There are lots of queues offering wildly different semantics that directly impact use cases.

11. Gordon+Lr[view] [source] [discussion] 2021-06-12 17:47:54
>>aether+52
For years I've been using RabbitMQ in small teams, and as a one-man team too.

As long as you don't need clustering (even a single node can handle some pretty heavy load), it's actually really simple to set up and use - way easier than Postgres itself, for example.

My biggest beefs with it have historically been the Erlang-style config files (which are basically unreadable) and the ridiculous spiel of error messages you get if you have an invalid configuration. But thankfully RabbitMQ switched to a much simpler config file format one or two years back, and I understand the Erlang/OTP folks are working on better error messages and stack traces.
