zlacker

[return to "On SQS"]
1. Jemacl+OX 2019-05-27 17:05:22
>>mpweih+(OP)
We love SQS, but one of the problems we're running into lately is the 256 KB per-message limit. We do tens of millions of messages per day, with a small percentage of those hitting the 256 KB ceiling, and we're approaching the point where most of our messages will hit it.

What are our options for keeping SQS but somehow sending large payloads? Only thing I can think of is throwing them into another datastore and using the SQS just as a pointer to, say, a key in a Redis instance.

(Kafka is probably off the table for this, but I could be convinced. I'd like to hear other solutions first, though.)

2. somede+DY 2019-05-27 17:10:39
>>Jemacl+OX
We point to an S3 object for any large payloads.
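For what it's worth, a minimal sketch of that claim-check pattern. The helper names are made up for illustration, and `FakeS3` is an in-memory stand-in with the same `put_object`/`get_object` call shape as `boto3.client("s3")`, so the sketch runs without AWS credentials:

```python
import io
import json
import uuid

SQS_MAX_BYTES = 256 * 1024  # SQS per-message limit

def wrap_payload(body: str, s3, bucket: str) -> str:
    """Return an SQS-safe body, offloading oversized payloads to S3."""
    if len(body.encode("utf-8")) <= SQS_MAX_BYTES:
        return body  # small enough to send inline
    key = f"sqs-overflow/{uuid.uuid4()}"  # hypothetical key scheme
    s3.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
    return json.dumps({"s3_pointer": {"bucket": bucket, "key": key}})

def unwrap_payload(body: str, s3) -> str:
    """On the consumer side, resolve a pointer back to the original payload."""
    try:
        ptr = json.loads(body).get("s3_pointer")
    except (ValueError, AttributeError):
        return body  # ordinary inline message, not JSON / not a dict
    if not ptr:
        return body
    obj = s3.get_object(Bucket=ptr["bucket"], Key=ptr["key"])
    return obj["Body"].read().decode("utf-8")

class FakeS3:
    """In-memory stand-in mimicking the two boto3 S3 calls used above."""
    def __init__(self):
        self.store = {}
    def put_object(self, Bucket, Key, Body):
        self.store[(Bucket, Key)] = Body
    def get_object(self, Bucket, Key):
        return {"Body": io.BytesIO(self.store[(Bucket, Key)])}
```

Amazon publishes an SQS Extended Client Library (for Java) that implements essentially this pattern, including cleanup of the S3 object on delete.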
3. Jemacl+cy1 2019-05-27 22:54:26
>>somede+DY
Wouldn't sending and later retrieving millions of S3 objects be expensive?
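A back-of-envelope estimate helps here. The figures below are assumptions, not from the thread: roughly us-east-1 S3 Standard request pricing (~$0.005 per 1,000 PUTs, ~$0.0004 per 1,000 GETs; check current rates) and a hypothetical 10M offloaded messages per day:

```python
# Rough S3 request-cost estimate; pricing figures are assumptions
# (approximate us-east-1 S3 Standard rates), not authoritative.
put_per_1k = 0.005         # $ per 1,000 PUT requests
get_per_1k = 0.0004        # $ per 1,000 GET requests
msgs_per_day = 10_000_000  # hypothetical: every message offloaded

daily_cost = msgs_per_day / 1_000 * (put_per_1k + get_per_1k)
print(f"${daily_cost:.2f}/day, roughly ${daily_cost * 30:,.0f}/month")
```

PUTs dominate, and only offloading messages that actually exceed the limit (rather than all of them) scales the cost down proportionally; a lifecycle rule that expires the overflow objects after a day or two keeps storage cost negligible.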