zlacker

On SQS
1. Jemacl+OX 2019-05-27 17:05:22
>>mpweih+(OP)
We love SQS, but one of the problems we're running into lately is the 256 KB per-message limit. We do tens of millions of messages per day, with a small percentage of those reaching the 256 KB limit. We're approaching the point where most of our messages will hit that limit.

What are our options for keeping SQS but somehow sending large payloads? The only thing I can think of is throwing them into another datastore and using the SQS message just as a pointer to, say, a key in a Redis instance (sketched below).

(Kafka is probably off the table for this, but I could be convinced. I'd like to hear other solutions first, though.)
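For concreteness, here's a minimal sketch of that pointer idea in Python with boto3 and redis-py. The queue URL, Redis host, key prefix, and TTL are all placeholder assumptions, not anything we actually run:

```python
import json
import uuid

import boto3
import redis

sqs = boto3.client("sqs")
cache = redis.Redis(host="redis.internal", port=6379)  # placeholder host

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder
PAYLOAD_TTL_SECONDS = 24 * 60 * 60  # assumed; should outlive the queue's retention


def send_large(payload: bytes) -> str:
    """Store the payload in Redis and enqueue only a pointer to it."""
    key = f"payload:{uuid.uuid4()}"
    cache.set(key, payload, ex=PAYLOAD_TTL_SECONDS)
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"payload_key": key}))
    return key


def receive_large() -> bytes | None:
    """Dequeue a pointer message and fetch the real payload from Redis."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        key = json.loads(msg["Body"])["payload_key"]
        payload = cache.get(key)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return payload
    return None
```

The obvious caveat is that Redis isn't durable by default, so an eviction or restart leaves you with a pointer to nothing; an object store like S3 (see the next comment) avoids that.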

2. mateus+Ya1 2019-05-27 18:52:27
>>Jemacl+OX
We have a library that puts the payload in an S3 bucket under a random key; the bucket has an expiration policy of a few days. Then we generate an HTTP link to the object and send an SQS message with that URL in its metadata (sketched below). The reader library gets the data from S3, and it doesn't even have to delete it, since the object disappears automatically later.

We do it ourselves rather than using the provided lib, because that way it works for both SQS and SNS; the provided lib only supports SQS.

Also, our messages typically aren't very big, so we only do this when the payload size demands it.
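A rough sketch of that flow with boto3, not our actual library: the bucket name and queue URL are placeholders, and the bucket is assumed to already have a lifecycle rule that expires objects after a few days:

```python
import urllib.request
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "my-large-payloads"  # placeholder; assumed to have a lifecycle expiration rule
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder


def publish(body: str, payload: bytes) -> None:
    """Upload the payload to S3, then send an SQS message carrying a link to it."""
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=3 * 24 * 3600
    )
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageAttributes={"payload_url": {"DataType": "String", "StringValue": url}},
    )


def consume() -> bytes | None:
    """Receive one message and fetch its payload via the embedded link."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MessageAttributeNames=["All"], MaxNumberOfMessages=1
    )
    for msg in resp.get("Messages", []):
        url = msg["MessageAttributes"]["payload_url"]["StringValue"]
        payload = urllib.request.urlopen(url).read()  # no delete; lifecycle expiry cleans up
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return payload
    return None
```

In practice you'd only take the S3 detour when the serialized message would exceed the 256 KB limit, and send the payload inline otherwise.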

3. Jemacl+dy1 2019-05-27 22:54:38
>>mateus+Ya1
Wouldn't sending and later retrieving millions of S3 objects be expensive?