
[return to "On SQS"]
1. really+T6 2019-05-27 08:07:23
>>mpweih+(OP)
Nowhere is cost mentioned. Using S3 as an ad-hoc queue is a cheaper solution, which should raise some red flags. You can easily do what SQS does for so much less (including horizontal scaling and failure planning) that I'm consistently surprised anyone uses it. Either you're running at a volume where you need high throughput and it's pricey, or you're at such low throughput that you could use any MQ (even Redis).

> Oh but what about ORDERED queues? The only way to get ordered application of writes is to perform them one after the other.

This is another WTF. Talking about ordered queues is like talking about databases, because it's structured data. If you can feed data from concurrent, unordered sources into a system where access can be ordered, you have access to sorted data. You deal with out-of-order data either at insertion, with a window during processing, or in the consumers. "Write in order" is not a requirement, it's an option. Talking about technical subjects on Twitter always results in some mind-numbingly idiotic statements for the sake of 140 characters.
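A minimal sketch of the consumer-side option, assuming each message carries its own timestamp; the 60-second window and message shape are illustrative assumptions, not anything from this thread:

    import heapq
    import itertools
    import time

    WINDOW_SECONDS = 60          # how long to wait for stragglers (assumption)
    _seq = itertools.count()     # tiebreaker so equal timestamps never compare payloads
    buffer = []                  # min-heap of (timestamp, seq, message)

    def on_message(timestamp, message):
        # arrivals can be out of order; order is restored on flush
        heapq.heappush(buffer, (timestamp, next(_seq), message))

    def flush(now=None):
        # emit everything older than the window, in timestamp order
        now = time.time() if now is None else now
        while buffer and buffer[0][0] <= now - WINDOW_SECONDS:
            ts, _, msg = heapq.heappop(buffer)
            yield ts, msg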

2. archgo+H7 2019-05-27 08:20:00
>>really+T6
> Using S3 as an ad-hoc queue is a cheaper solution, which should throw some red flags.

Interesting. Can you expand on this? How do you ensure that only one worker takes a message from S3? Or do you only use this setup when you have a single worker?

3. really+h8 2019-05-27 08:26:55
>>archgo+H7
You encode messages with a timestamp and origin (e.g. 1558945545-1) and write them directly to S3 into a (create-if-not-exists) folder for a specific window (let's say one minute). With every agent writing, you end up with a new folder for the next minute. Within each window you then have an ordered set of messages, whose sort order is optimally determined by the naming encoding.
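A rough sketch of that layout with boto3; the bucket name, prefix, and one-minute window are assumptions to illustrate the encoding, not details from the comment:

    import time
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-adhoc-queue"        # hypothetical bucket name

    def publish(origin_id, body):
        now = int(time.time())
        window = now - (now % 60)    # per-minute "folder"
        # key sorts by window, then timestamp, then origin,
        # e.g. queue/1558945500/1558945545-1
        key = f"queue/{window}/{now}-{origin_id}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)

    def consume(window):
        # list_objects_v2 returns keys in lexicographic order, which here
        # matches the timestamp-origin encoding within a window
        # (pagination beyond 1000 keys omitted for brevity)
        resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"queue/{window}/")
        for obj in resp.get("Contents", []):
            yield s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()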