
1. Jemacl+OX 2019-05-27 17:05:22
>>mpweih+(OP)
We love SQS, but one of the problems we've run into lately is the 256 KB per-message limit. We do tens of millions of messages per day, with a small percentage of those reaching that cap, and we're approaching the point where most of our messages will hit it.

What are our options for keeping SQS but somehow sending large payloads? The only thing I can think of is throwing them into another datastore and using SQS just to carry a pointer, say, a key in a Redis instance (rough sketch below).

(Kafka is probably off the table for this, but I could be convinced. I'd like to hear other solutions first, though.)
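
A rough sketch of that claim-check idea, assuming boto3 and redis-py; the queue URL, key prefix, and TTL below are made up:

    import json
    import uuid

    import boto3   # assumes AWS credentials are already configured
    import redis

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/big-payloads"  # hypothetical
    r = redis.Redis(host="localhost", port=6379)
    sqs = boto3.client("sqs")

    def send_large(payload: bytes, ttl_seconds: int = 86400) -> None:
        # Park the payload in Redis with a TTL so orphaned blobs expire,
        # then enqueue only the key.
        key = f"sqs-payload:{uuid.uuid4()}"
        r.set(key, payload, ex=ttl_seconds)
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"payload_key": key}))

    def receive_large():
        # Dequeue a pointer message and dereference it from Redis.
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
        for msg in resp.get("Messages", []):
            payload = r.get(json.loads(msg["Body"])["payload_key"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
            return payload
        return None

FWIW, the Amazon SQS Extended Client Library for Java does essentially this, with S3 as the backing store instead of Redis.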

2. manish+qi1 2019-05-27 19:58:52
>>Jemacl+OX
I've been using Kafka successfully in production, with a tiny percentage of messages around 1 MB. We use gzip compression with chunking on some of those messages (the system is legacy, so no fancy compressed message format is in use). IIRC, we only had to change settings at the broker level, and it works perfectly fine.
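
For reference, the knobs involved look roughly like this; a sketch using kafka-python, with the broker address, topic name, and the 2 MB ceiling as placeholders:

    from kafka import KafkaProducer  # assumes kafka-python is installed

    # Broker side (server.properties) -- raise these above the ~1 MB default:
    #   message.max.bytes=2097152
    #   replica.fetch.max.bytes=2097152

    producer = KafkaProducer(
        bootstrap_servers="broker:9092",   # placeholder address
        compression_type="gzip",           # compress batches client-side
        max_request_size=2097152,          # let the client send up to 2 MB
    )
    producer.send("large-events", value=b"...chunked, gzipped payload...")
    producer.flush()

Note that the producer compresses whole batches, so the gzip win is bigger when similar messages end up batched together.
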
3. Jemacl+xy1 2019-05-27 22:56:43
>>manish+qi1
We've discussed switching to Kafka. There are some pros/cons to doing that. With respect to my problem above, our messages _could_ conceivably approach 1 MB (or even surpass it), so we're really just delaying the inevitable. That said, we're a long, long way from hitting that limit, so it's definitely something we're looking at.

We just recently started gzipping our payloads, which buys us even more time.
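
For anyone trying the same thing: SQS message bodies have to be text, so the usual pattern is gzip plus base64, roughly like this (function names are illustrative):

    import base64
    import gzip
    import json

    def encode_for_sqs(obj) -> str:
        # SQS bodies must be valid text, so gzip then base64-encode.
        return base64.b64encode(gzip.compress(json.dumps(obj).encode("utf-8"))).decode("ascii")

    def decode_from_sqs(body: str):
        return json.loads(gzip.decompress(base64.b64decode(body)))

Keep in mind base64 adds back roughly a third of the size, so the compression has to beat that overhead.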
