zlacker

[return to "On SQS"]
1. hexene+ci[view] [source] 2019-05-27 10:39:30
>>mpweih+(OP)
One downside of SQS is that it doesn't support fan-out, e.g. S3 -> SQS -> multiple consumers. The recommendation instead seems to be to push to SNS first, and then hook up SQS/other consumers to it. Kinesis/Kafka would appear to be better suited for this (since they support fan-out like SNS and are pull-based like SQS), but they aren't as well supported as SNS/SQS (you can't push S3 events directly to Kinesis, for example). Can someone from AWS comment on why that is? Also, related: when can we expect GA for Kafka (MSK)?
2. static+hu[view] [source] 2019-05-27 12:55:00
>>hexene+ci
I do S3 -> SNS -> SQS. I don't see why I would use Kinesis instead. The SNS bit is totally invisible to the consumers (you can even tell SNS not to wrap the inner message with the SNS boilerplate), downstream consumers just know they have to listen to a queue.

I don't see a downside to this approach. Perhaps some increased latency?
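The wiring described here (one shared SNS topic fanning out to a per-consumer SQS queue, with raw message delivery so the SNS envelope disappears) can be sketched roughly with boto3. The function names and ARNs below are illustrative, not from the thread:

```python
import json

def queue_policy(queue_arn, topic_arn):
    """Access policy letting one SNS topic deliver into one SQS queue."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Only this specific topic may send to the queue.
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

def wire_consumer(sns, sqs, topic_arn, queue_url, queue_arn):
    """Attach one consumer's queue to the shared topic.

    `sns` and `sqs` are boto3 clients. RawMessageDelivery=true is the
    "don't wrap the inner message" option mentioned above: consumers
    receive the original S3 event, not an SNS envelope.
    """
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": queue_policy(queue_arn, topic_arn)},
    )
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )
```

Adding a new consumer is then just another queue plus one more `wire_consumer` call; existing consumers are untouched.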

3. hexene+zz[view] [source] 2019-05-27 13:38:19
>>static+hu
If you wanted multiple pull-based consumers for the stream, wouldn't you need a separate SQS queue per consumer, with each queue hooked up to SNS? Perhaps I'm mistaken, but that seems brittle to me. With Kinesis/Kafka, you only need to register a new appName/consumer group on the single stream to get fan-out. Plus, both are FIFO by default, at least within a partition.
4. static+EC[view] [source] 2019-05-27 14:01:22
>>hexene+zz
That's exactly how you do it. To me, it's the opposite of brittle - every consumer owns a queue, and is isolated from all other consumers. Clients are totally unaware of other systems, and there's no shared resource under contention.
5. skybri+jI[view] [source] 2019-05-27 14:59:03
>>static+EC
Hmm. It seems a bit awkward if you have a variable number of consumers?
6. static+0L[view] [source] 2019-05-27 15:26:54
>>skybri+jI
I haven't run into that myself; when would you want a variable number of consumers? Usually the way I have it is that one service, which is itself a cluster of processes, owns one queue. For example, an AWS Lambda triggered by that queue.

Then any new lambdas or other services that want to subscribe to messages will have another queue, and another, etc.

I haven't had a case where I had service groups coming up and down, and I'm struggling to think of a use case.
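A minimal sketch of the SQS-triggered Lambda consumer described here, assuming RawMessageDelivery was enabled on the SNS subscription so each record's body is the original S3 event rather than an SNS envelope (the handler name and return value are illustrative):

```python
import json

def handler(event, context=None):
    """SQS-triggered Lambda: extract the S3 object keys from each record.

    `event["Records"]` is the batch of SQS messages Lambda delivers;
    each body is assumed to be a raw S3 event notification.
    """
    keys = []
    for record in event["Records"]:
        body = json.loads(record["body"])
        for s3_record in body.get("Records", []):
            keys.append(s3_record["s3"]["object"]["key"])
    return keys
```

Each subscribing service just gets its own queue and its own copy of the messages; the handler never needs to know about other consumers.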
