The key retry problem is "What happens when a worker crashes?".
RabbitMQ solves this problem by tying unacknowledged messages to a TCP connection. If the connection dies, the in-flight messages are made available to other connections. This is a decent approach, but we hit a lot of issues with bugs in our code that failed to acknowledge a message, and the message would get stuck until that handler cycled. They've improved this over the past year or so with consumer timeouts, but we've already moved on.
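The semantics described above can be sketched as a toy in-memory model. This is illustrative only, not the AMQP API: unacked deliveries are pinned to a connection, and only the connection dying returns them to the queue.

```python
from collections import deque

class ToyQueue:
    """Toy model of RabbitMQ's ack semantics (not the real AMQP API)."""

    def __init__(self):
        self.ready = deque()
        self.unacked = {}  # (connection_id, delivery_tag) -> message

    def publish(self, message):
        self.ready.append(message)

    def deliver(self, connection_id, delivery_tag):
        message = self.ready.popleft()
        self.unacked[(connection_id, delivery_tag)] = message
        return message

    def ack(self, connection_id, delivery_tag):
        # A handler bug that never calls ack leaves the message stuck
        # here until the connection cycles.
        del self.unacked[(connection_id, delivery_tag)]

    def connection_closed(self, connection_id):
        # Requeue everything the dead connection had in flight.
        for key in [k for k in self.unacked if k[0] == connection_id]:
            self.ready.append(self.unacked.pop(key))

q = ToyQueue()
q.publish("job-1")
q.deliver(connection_id="conn-a", delivery_tag=1)
q.connection_closed("conn-a")  # worker crashed: job-1 is available again
```

Note that nothing short of closing the connection frees the stuck message, which is exactly the failure mode the buggy handlers hit.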
The second problem we hit with RabbitMQ was that it uses one Erlang process per queue, and we found that big bursts of traffic could saturate a single CPU. There are ways to use sharded queues, or to re-architect around dynamically created queues, but the complexity led us toward SQS.
Sidekiq solves "What happens when a worker crashes?" by just not solving it. In the free version, those jobs are just lost. In Sidekiq Pro there are features that provide some guarantees that the jobs will not be lost, but no guarantees about when they will be processed (nor where they will be processed). Simply put, some worker sometime will see the orphaned job and decide to give it another shot. It's not super common, but it is worse in containerized environments where memory limits can trigger the OOM killer and cause a worker to die immediately.
The other issue with Sidekiq has been a general lack of hard constraints around resources. Redis's single event thread means that when things go sideways, everything breaks. We've had errant jobs enqueued with 100MB of JSON and seen it jam things up badly when Sidekiq tries to parse that with a Lua script (on the event thread). While it's obvious that 100MB is too big to shove into a queue, mistakes happen, and tools that limit the blast radius add a lot of value.
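One way to add that blast-radius limit yourself is a size guard in front of your own enqueue call. This is a hypothetical sketch, not a Sidekiq feature; the 64KB limit and the function names are arbitrary examples.

```python
import json

# Arbitrary example limit; pick one that fits your workload.
MAX_PAYLOAD_BYTES = 64 * 1024

def safe_enqueue(queue, payload):
    """Reject oversized payloads before they reach the queue backend."""
    body = json.dumps(payload)
    size = len(body.encode("utf-8"))
    if size > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"payload is {size} bytes; pass a reference (e.g. an object "
            f"store key) instead of inlining more than {MAX_PAYLOAD_BYTES} bytes"
        )
    queue.append(body)

jobs = []
safe_enqueue(jobs, {"user_id": 42, "action": "resize_avatar"})
```

The usual pattern for the oversized case is to store the blob somewhere durable and enqueue only a pointer to it.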
We've been leaning heavily on SQS the past few years and it is indeed Simple. It blocks us from doing even marginally dumb things (max message size of 256KB). The visibility timeout approach for handling crashing workers is easy to reason about. DLQ tooling has finally improved, so you can redrive through standard AWS tools. There are some gaps we struggle with (e.g. firing callbacks when a set of messages is fully processed), but sometimes simple tools force you to simplify things on your end, and that ends up being a good thing.
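The visibility-timeout model is simple enough to sketch in a few lines. This is a toy in-memory model of the semantics, not the real SQS API: receiving a message hides it rather than removing it, and only an explicit delete (the "ack") makes it gone for good.

```python
from collections import deque

class ToyVisibilityQueue:
    """Toy model of SQS visibility-timeout semantics (not the real API)."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = deque()
        self.in_flight = {}  # receipt_handle -> (body, visible_again_at)

    def send(self, body):
        self.messages.append(body)

    def receive(self, now):
        # Return crashed workers' expired messages to the queue first.
        for handle, (body, deadline) in list(self.in_flight.items()):
            if now >= deadline:
                del self.in_flight[handle]
                self.messages.append(body)
        if not self.messages:
            return None
        body = self.messages.popleft()
        handle = f"handle-{now}"
        self.in_flight[handle] = (body, now + self.visibility_timeout)
        return handle, body

    def delete(self, handle):
        # Only an explicit delete removes the message for good.
        del self.in_flight[handle]

q = ToyVisibilityQueue(visibility_timeout=30)
q.send("process-upload")
handle, body = q.receive(now=0.0)
# Worker crashes without deleting; after the timeout the message returns.
redelivered = q.receive(now=31.0)
```

The nice property is that "worker crashed" and "worker is just slow" are handled by the same rule: no delete before the deadline means redelivery.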
I've been using Sidekiq for 11+ years in production and I've never seen this happen. Sidekiq (free version) has a very robust retry workflow. What are you talking about here?
Do you have any experience with NATS, and how would you compare it to RMQ/SQS?
The authors claim it guarantees exactly-once delivery with its JetStream component, and it looks very alluring from the documentation, but looks can be deceiving.
Bottom line, there is no silver bullet.
Granted now you need 3 services instead of 1. I personally don't find the maintenance cost particularly high for this architecture, but it does depend on what your team is comfortable with.
With the paid features to keep jobs from getting dropped, things can still be painful. We have a lot of different workers, all with different concurrency settings and resource limits. A memory-heavy worker might need a few GB of memory and be capped at a concurrency of 2, while a lightweight worker might only need 512MB and have a concurrency of 20. If the big memory worker crashes, its jobs might get picked up by the lightweight worker (possibly hours later), which will then OOM, and its 19 other in-flight jobs end up in the orphanage. And now your alerts are going off saying the lightweight worker is OOMing, and your team is scratching their heads because that doesn't make any sense. It just gets messy.
Sidekiq probably works great outside of containerized environments. Many swear to me they've never encountered any of these problems. And maybe we should be questioning the containerization rather than Sidekiq, but ultimately our operations have been much simpler as we've moved off of it.
TTL typically deletes expired items within a few days. Depending on the size and activity level of a table, the actual delete operation of an expired item can vary. Because TTL is meant to be a background process, the nature of the capacity used to expire and delete items via TTL is variable (but free of charge). [0]
Because of that limitation, I would not use that approach. Instead I would use scheduled Lambdas to check for due items every 15 minutes in a Serverless Aurora and then add them to SQS with delays.

I've had my eye on this problem for a few years and keep thinking that a simple SaaS that does one-shot scheduled actions would probably be a worthy side project. Not enough to build a company around, but maintenance would be low and there's probably some pricing that would attract enough customers to be sustainable.
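The scheduling math for that scan-every-15-minutes approach can be sketched as follows. The function name is illustrative, but the 900-second cap is real: SQS limits DelaySeconds to 15 minutes, which is exactly why the scan interval has to match it.

```python
from datetime import datetime, timedelta, timezone

# SQS caps DelaySeconds at 900 (15 minutes), so each scan only needs to
# pick up items due within the next 15-minute window.
SQS_MAX_DELAY_SECONDS = 900

def delays_for_window(now, due_times, window=timedelta(minutes=15)):
    """Return (due_time, delay_seconds) for items due within the window."""
    batch = []
    for due in due_times:
        if now <= due < now + window:
            delay = int((due - now).total_seconds())
            batch.append((due, min(delay, SQS_MAX_DELAY_SECONDS)))
    return batch

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
due_times = [now + timedelta(minutes=5), now + timedelta(hours=2)]
batch = delays_for_window(now, due_times)
# Only the 5-minute item is enqueued this run, with a 300-second delay;
# the 2-hour item waits for a later scan.
```

Each batch entry would then be sent with `SendMessage` and its computed `DelaySeconds`; items outside the window stay in the database for a later run.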
[0] https://docs.aws.amazon.com/amazondynamodb/latest/developerg...
I find this definition has morphed from one meaningful to developers into one that queue implementations like to claim. I've learned it generally means "multiple inserts will be deduped into only one message in the queue".
The only guarantee this `exactly-once` delivery provides is that I won't have two workers given the exact same job. Which is a nice guarantee, but I still have to decide on my processing behavior and am faced with the classic "at most once or at least once" dilemma around partially failed jobs. If I'm building my system to be idempotent so I can safely retry partially failed messages it doesn't do much for me.
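That idempotency work looks something like the sketch below. The names (`charge_card`, `processed_keys`) are illustrative, not a real API: side effects are keyed, so redelivering a partially failed message is safe regardless of what the queue guarantees.

```python
# In practice this would be a database table with a unique index on the
# key, so the check-and-record step is atomic; a plain set works for a
# single-process sketch.
processed_keys = set()
charges = []

def charge_card(idempotency_key, amount_cents):
    """Apply the side effect at most once per idempotency key."""
    if idempotency_key in processed_keys:
        return "already-processed"   # safe to redeliver the message
    charges.append(amount_cents)     # the real side effect
    processed_keys.add(idempotency_key)
    return "charged"

first = charge_card("order-123", 4999)
retry = charge_card("order-123", 4999)  # redelivery after a partial failure
```

Once the handler is shaped like this, at-least-once delivery is all you need from the queue, and the "exactly-once" enqueue guarantee buys you very little extra.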