zlacker

[parent] [thread] 3 comments
1. chucke+(OP)[view] [source] 2023-09-25 08:58:09
SQS limits you further in other ways. For instance, delayed messages are capped at 15 minutes (the DelaySeconds knob), so you'll be stuck when implementing the "cancel account if not verified in 7 days" workflow. You'll either re-enqueue a message every 15 minutes until it's ready (and eat the SQS costs), or build a bespoke solution just for scheduled tasks using some other store (usually the database) and another polling loop (at a fraction of the quality of any other OSS tool). This is a problem Sidekiq solves well, despite the other drawbacks you mention.
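
The re-enqueue dance looks roughly like this (a sketch only; the queue URL and message shape are hypothetical):

  import json
  import time

  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # hypothetical
  MAX_DELAY = 900  # SQS caps DelaySeconds at 15 minutes

  def schedule(task: dict, run_at: float) -> None:
      """Enqueue a task, deferring at most 15 minutes per hop."""
      remaining = max(0, int(run_at - time.time()))
      sqs.send_message(
          QueueUrl=QUEUE_URL,
          DelaySeconds=min(remaining, MAX_DELAY),
          MessageBody=json.dumps({"task": task, "run_at": run_at}),
      )

  def handle(message: dict) -> None:
      body = json.loads(message["Body"])
      if time.time() < body["run_at"]:
          # Not due yet: re-enqueue and pay for another hop.
          schedule(body["task"], body["run_at"])
      else:
          ...  # actually cancel the unverified account

For a 7-day deadline that's ~670 hops per message, which is where the cost comment comes from.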

Bottom line, there is no silver bullet.

replies(2): >>mtlgui+S8 >>throwa+gE1
2. mtlgui+S8[view] [source] 2023-09-25 10:32:51
>>chucke+(OP)
If you wanted to handle this scenario with the serverless AWS stack, my recommendation would be to push records to Dynamo with TTLs, and then have a Lambda (triggered off the table's stream) push them onto the queue when the TTL expires them. It would cost almost nothing to do this. If you had 10 million requests a month, your Lambda cost would be ~$150 (depending on duration, but just pushing to a queue should be quick). Dynamo would be another ~$50 to run, depending on how big your tasks are.
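
A minimal sketch of that Lambda, assuming DynamoDB Streams is enabled on the table with OLD_IMAGE (the queue URL is hypothetical). TTL deletions arrive as REMOVE records attributed to the DynamoDB service principal, which is how you tell them apart from application deletes:

  import json

  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # hypothetical

  def handler(event, context):
      for record in event["Records"]:
          # TTL expirations are REMOVE events performed by DynamoDB itself.
          if record["eventName"] != "REMOVE":
              continue
          identity = record.get("userIdentity", {})
          if identity.get("principalId") != "dynamodb.amazonaws.com":
              continue  # skip application-initiated deletes
          item = record["dynamodb"]["OldImage"]  # requires OLD_IMAGE on the stream
          sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))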

Granted, now you need 3 services instead of 1. I personally don't find the maintenance cost particularly high for this architecture, but it does depend on what your team is comfortable with.

replies(1): >>phamil+z91
3. phamil+z91[view] [source] [discussion] 2023-09-25 15:34:43
>>mtlgui+S8
I've explored this space pretty thoroughly, including the Dynamo approach you've described. Dynamo does not have a strict guarantee on when items get deleted:

  TTL typically deletes expired items within a few days. Depending on the size and activity level of a table, the actual delete operation of an expired item can vary. Because TTL is meant to be a background process, the nature of the capacity used to expire and delete items via TTL is variable (but free of charge). [0]
Because of that limitation, I would not use that approach. Instead, I would use a scheduled Lambda that checks a Serverless Aurora table for due items every 15 minutes and adds them to SQS with delays.
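
Roughly like this (a sketch; the table schema, column names, and queue URL are all hypothetical, and `cur` is a DB-API cursor against the Aurora instance):

  import json
  import time

  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # hypothetical
  WINDOW = 900  # the 15-minute polling interval, also the DelaySeconds cap

  def poll(cur):
      """Runs on a 15-minute schedule; enqueues everything due within the window."""
      now = int(time.time())
      cur.execute(
          "SELECT id, payload, run_at FROM scheduled_tasks"
          " WHERE run_at < %s AND enqueued = FALSE",
          (now + WINDOW,),
      )
      for task_id, payload, run_at in cur.fetchall():
          sqs.send_message(
              QueueUrl=QUEUE_URL,
              # Remaining wait fits inside one SQS delay, since run_at < now + WINDOW.
              DelaySeconds=max(0, min(run_at - now, WINDOW)),
              MessageBody=json.dumps({"id": task_id, "payload": payload}),
          )
          cur.execute("UPDATE scheduled_tasks SET enqueued = TRUE WHERE id = %s", (task_id,))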

I've had my eye on this problem for a few years and keep thinking that a simple SaaS that does one-shot scheduled actions would probably be a worthy side project. Not enough to build a company around, but maintenance would be low and there's probably some pricing that would attract enough customers to be sustainable.

[0] https://docs.aws.amazon.com/amazondynamodb/latest/developerg...

4. throwa+gE1[view] [source] 2023-09-25 17:22:56
>>chucke+(OP)
You could probably use Amazon EventBridge Scheduler to post the message to SQS in 7 days; it supports one-time at() schedules with an SQS target.
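
Something like this with boto3 (the queue and role ARNs are hypothetical, and the role needs permission to send to the queue):

  import json
  from datetime import datetime, timedelta, timezone

  import boto3

  scheduler = boto3.client("scheduler")
  QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:tasks"         # hypothetical
  ROLE_ARN = "arn:aws:iam::123456789012:role/scheduler-to-sqs"   # hypothetical

  def schedule_verification_check(account_id: str) -> None:
      run_at = datetime.now(timezone.utc) + timedelta(days=7)
      scheduler.create_schedule(
          Name=f"verify-check-{account_id}",
          # One-shot schedule; at() takes a timezone-naive timestamp (UTC by default).
          ScheduleExpression=f"at({run_at.strftime('%Y-%m-%dT%H:%M:%S')})",
          FlexibleTimeWindow={"Mode": "OFF"},
          Target={
              "Arn": QUEUE_ARN,
              "RoleArn": ROLE_ARN,
              "Input": json.dumps({"action": "cancel_if_unverified", "account_id": account_id}),
          },
      )

No polling loop and no re-enqueue hops, at the cost of one schedule object per pending task.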