zlacker

[return to "On SQS"]
1. Jemacl+OX[view] [source] 2019-05-27 17:05:22
>>mpweih+(OP)
We love SQS, but one of the problems we're running into lately is the 256 KB per-message limit. We do tens of millions of messages per day, with a small percentage of those reaching the 256 KB limit. We're approaching the point where most of our messages will hit that limit.

What are our options for keeping SQS but somehow sending large payloads? Only thing I can think of is throwing them into another datastore and using the SQS just as a pointer to, say, a key in a Redis instance.

(Kafka is probably off the table for this, but I could be convinced. I'd like to hear other solutions first, though.)

2. mark24+f01[view] [source] 2019-05-27 17:22:51
>>Jemacl+OX
The Amazon SQS Extended Client Library for Java automatically drops >256 KB messages into an S3 bucket and stores a pointer to the payload in SQS itself. You set the bucket once, and the client transparently fetches the payload from S3 when a pointer message is received.

The S3 keys are just random UUIDs (v4), so you could probably roll your own implementation in any language.
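The wrap/unwrap logic behind that pattern is small enough to sketch. This is a minimal illustration, not the Java client's actual wire format: the function names and the `s3_key` pointer field are made up, and a plain dict stands in for the S3 bucket (in production you'd swap in boto3's `put_object`/`get_object` and pass the wrapped body to `sqs.send_message`):

```python
import json
import uuid

SIZE_LIMIT = 256 * 1024  # SQS max message size in bytes

fake_s3 = {}  # key -> payload; stands in for an S3 bucket in this sketch

def wrap(body: str) -> str:
    """Return the SQS message body: the payload itself if it fits,
    otherwise a small JSON pointer to where the payload was stored."""
    if len(body.encode("utf-8")) <= SIZE_LIMIT:
        return body
    key = str(uuid.uuid4())  # random key, like the Java client's UUIDs
    fake_s3[key] = body      # real version: s3.put_object(Bucket=..., Key=key, Body=body)
    return json.dumps({"s3_key": key})

def unwrap(message: str) -> str:
    """Resolve a received message body back to the original payload."""
    try:
        pointer = json.loads(message)
        if isinstance(pointer, dict) and "s3_key" in pointer:
            return fake_s3[pointer["s3_key"]]  # real version: s3.get_object(...)
    except json.JSONDecodeError:
        pass
    return message
```

One caveat this sketch glosses over: sniffing the body for a JSON pointer can false-positive on small messages that happen to look like one, which is why the Java client flags extended messages with an SQS message attribute instead. You also need a cleanup story (delete or lifecycle-expire the S3 object after the message is consumed).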
