zlacker

[parent] [thread] 20 comments
1. polski+(OP)[view] [source] 2019-05-27 14:30:54
Does anyone know a good, low-overhead, out-of-process message queue that's lightweight enough to be useful for communicating between processes on the same machine, but can scale beyond it if necessary? For a single-machine product that comprises several services, a message queue can sometimes be useful for a pull model, but adding RabbitMQ to the stack makes installation and ops much more complex than customers deem acceptable.

I know some people use Akka with Persistence module, but I would welcome other alternatives.

replies(4): >>notdun+k1 >>wmfiv+r3 >>gizzlo+C3 >>eroppl+d4
2. notdun+k1[view] [source] 2019-05-27 14:43:58
>>polski+(OP)
Redis fits this niche for me most of the time, but I always start in-process for simplicity. If durability is all you need, SQLite is a great reliable option.
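SQLite as a durable queue can be sketched in a few lines of stdlib Python. This is just an illustrative sketch, not anything from the thread: the table layout, WAL pragma, and delete-on-dequeue semantics are all my assumptions.

```python
import sqlite3

def open_queue(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, isolation_level=None)  # autocommit; we manage transactions
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    conn.execute(
        "CREATE TABLE IF NOT EXISTS queue ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " body TEXT NOT NULL)"
    )
    return conn

def enqueue(conn: sqlite3.Connection, body: str) -> None:
    conn.execute("INSERT INTO queue (body) VALUES (?)", (body,))

def dequeue(conn: sqlite3.Connection):
    # BEGIN IMMEDIATE takes the write lock up front, so two consumers
    # can't pop the same row.
    conn.execute("BEGIN IMMEDIATE")
    row = conn.execute(
        "SELECT id, body FROM queue ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        conn.execute("COMMIT")
        return None
    conn.execute("DELETE FROM queue WHERE id = ?", (row[0],))
    conn.execute("COMMIT")
    return row[1]
```

With a file-backed database this survives process restarts, which is the "durability is all you need" case.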
3. wmfiv+r3[view] [source] 2019-05-27 15:05:07
>>polski+(OP)
Depending on the project, a database (Postgres) can make a nice database, queue, and cache. You get transactions across all three, which makes life much simpler in the beginning, and it will happily march along processing thousands or tens of thousands of TPS. You can scale by adding read replicas and eventually moving to separate databases for each purpose.

If you're on the JVM then ActiveMQ (and other Java queues) will usually run embedded and have options to use a database for persistence.

4. gizzlo+C3[view] [source] 2019-05-27 15:07:21
>>polski+(OP)
Guess it depends on the definition of "queue". Potentials:

  - https://nsq.io/
  - https://nats.io/
replies(3): >>eroppl+K4 >>polski+td >>polski+PJ
5. eroppl+d4[view] [source] 2019-05-27 15:15:52
>>polski+(OP)
"Low overhead"? Redis.

Reliable and durable? Emphatically not Redis, and at that point you probably want to just start looking at SQS.

(I've had good luck shipping products as docker-compose files, these days. Even to fairly backwards clients.)

replies(2): >>polski+X5 >>tjanks+g6
6. eroppl+K4[view] [source] [discussion] 2019-05-27 15:21:34
>>gizzlo+C3
I am curious what definition of "queue" fits NSQ. It's a distributed messaging platform.
replies(1): >>gizzlo+Y8
7. polski+X5[view] [source] [discussion] 2019-05-27 15:31:11
>>eroppl+d4
I guess I didn't make it clear: the solution has to work in-house / in a private cloud.
8. tjanks+g6[view] [source] [discussion] 2019-05-27 15:33:38
>>eroppl+d4
Can you explain why Redis is not durable? Looking into it for a project but this comment worries me.
replies(2): >>chucks+Kf >>Jemacl+Di
9. gizzlo+Y8[view] [source] [discussion] 2019-05-27 15:56:12
>>eroppl+K4
The normal one. It puts stuff in a queue.

"nsqd is the daemon that receives, queues, and delivers messages to clients."

10. polski+td[view] [source] [discussion] 2019-05-27 16:28:56
>>gizzlo+C3
Do they offer at-least-once or better delivery guarantees?
11. chucks+Kf[view] [source] [discussion] 2019-05-27 16:46:30
>>tjanks+g6
Intentionally so. It's not a deficiency or a footgun, it's a design decision to be aware of. Redis is an in-memory database first.

You can configure Redis for durability. The docs[1] page for persistence has a good examination of the pros and cons.

[1]: https://redis.io/topics/persistence

replies(1): >>cookie+471
12. Jemacl+Di[view] [source] [discussion] 2019-05-27 17:07:21
>>tjanks+g6
In my experience Redis is plenty durable (especially in Elasticache with replicas + multi-region + backups). Redis _is_ an in-memory data store, so if the server crashes, you lose the data, but if you have replicas it'll fail over, if you have backups you can restore, and if you have multi-region you can failover to the other region. IMO, the idea that Redis is not durable enough is outdated.

It's something to be aware of and to have backup plans for, but we've been using Redis as our primary datastore for over a year with only one or two instance failures, which were resolved within minutes by failing over to replicas, with no data loss.

replies(1): >>maniga+wY
13. polski+PJ[view] [source] [discussion] 2019-05-27 21:13:52
>>gizzlo+C3
Seems like NATS Streaming would fit my case - have you heard of any real-world deployments that use it? Are there any larger issues that would make it a poor choice?
replies(1): >>maniga+mY
14. maniga+mY[view] [source] [discussion] 2019-05-28 00:14:01
>>polski+PJ
NATS Streaming is not as well tested and has some design issues that make scaling hard. NATS itself has a new version 2 with a protocol update, and NATS Streaming should follow with a new design as well, but I would recommend other options if you want persistence.
replies(3): >>tuxych+p71 >>polski+hf1 >>polski+wo1
15. maniga+wY[view] [source] [discussion] 2019-05-28 00:16:41
>>Jemacl+Di
Redis won't lose data from just a server crash. It has persistence with streaming AOF mode and snapshot RDB mode, and they can be combined. fsync is also configurable. It can be set to write every operation to disk but most set it to every 1 second which is safe enough with replicas.
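The combination described above can be expressed as a short redis.conf fragment. These are real directives from the persistence docs; the specific values are illustrative, not a recommendation:

```
# RDB: snapshot if at least 1 key changed in the last hour
save 3600 1

# AOF: append every write to a log
appendonly yes

# fsync the AOF once per second (the common throughput/safety trade-off)
appendfsync everysec
# appendfsync always   # fsync every write: safest, slowest
```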
16. cookie+471[view] [source] [discussion] 2019-05-28 02:27:26
>>chucks+Kf
antirez put a lot of work in around Redis 3.0 (iirc) to make the persistence reliable and strong. As long as your server is configured and used correctly (obviously a large caveat, but you can only hold someone's hand so much), I don't think there is any reason to doubt Redis's persistence anymore.

It's important to make this distinction because there are commonly-used systems that offer a best-effort style persistence that usually works fine, but explicitly warn developers not to trust it, and developers rarely understand that.

We badly need to get better at distinguishing between true, production-level data integrity and "probably fine".

17. tuxych+p71[view] [source] [discussion] 2019-05-28 02:31:14
>>maniga+mY
Can you share some details or point to articles describing the design issues with NATS streaming?
replies(1): >>maniga+wD2
18. polski+hf1[view] [source] [discussion] 2019-05-28 04:37:16
>>maniga+mY
What are the design flaws that you have in mind? Is it OK for a couple of nodes, or would it have trouble keeping up even with a medium load? Or are the design flaws to do with providing durability and other guarantees?

What other options would you recommend that provide at-least-once delivery and are lightweight enough not to require ZooKeeper etc.?

replies(1): >>maniga+vD2
19. polski+wo1[view] [source] [discussion] 2019-05-28 06:55:15
>>maniga+mY
Is this what you meant ?

https://github.com/nats-io/nats-streaming-server/issues/168

20. maniga+vD2[view] [source] [discussion] 2019-05-28 17:53:05
>>polski+hf1
NATS Streaming isn't just a persistence layer for NATS. It's an entirely separate system that basically acts as a client to NATS and records the messages it sees. Think of how you would design a persistent queue on top of the ephemeral NATS pub/sub, and that's what NATS Streaming is.
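That layering can be shown with a toy sketch: "core NATS" is reduced to a bare in-memory pub/sub, and the "streaming" layer is just another subscriber that records every message it sees into an append-only log. All names here are made up for illustration; this is not the NATS API.

```python
class PubSub:
    """Stand-in for core NATS: fire-and-forget delivery, no persistence."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, subject, callback):
        self.subs.setdefault(subject, []).append(callback)

    def publish(self, subject, msg):
        for callback in self.subs.get(subject, []):
            callback(msg)  # no acks, no replay: miss it and it's gone

class StreamingLayer:
    """Stand-in for NATS Streaming: a client that records what it sees."""
    def __init__(self, bus, subject):
        self.log = []  # a durable store in the real system
        bus.subscribe(subject, self.log.append)

    def replay(self, from_seq=0):
        # Late subscribers can catch up from the recorded log.
        return self.log[from_seq:]

bus = PubSub()
stream = StreamingLayer(bus, "orders")
bus.publish("orders", "order-1")
bus.publish("orders", "order-2")
```

The key point the sketch makes is that the log lives in a layer *beside* the pub/sub core, not inside it, which is why the two systems can diverge in guarantees.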

Here's a good post (and series) about distributed logs and NATS design issues: https://bravenewgeek.com/building-a-distributed-log-from-scr...

21. maniga+wD2[view] [source] [discussion] 2019-05-28 17:53:17
>>tuxych+p71
See the other comment but this is a good post/series: https://bravenewgeek.com/building-a-distributed-log-from-scr...