RabbitMQ clustering and mirrored queues behavior behind the scenes

Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node?

From what I read, it seems that all actions other than publishes go only to the master, and the master then broadcasts the effect of those actions to the slaves (this is from the documentation). From my understanding, this means a consumer will always consume messages from the master queue. Also, if I send a consume request to a slave, that slave will do an extra hop by going to the master to fetch the message.

But what happens when I publish to a slave node? Will that node do the same thing and send the message to the master first?

There seem to be a lot of extra hops when dealing with slaves, so it looks like you would get better performance if you always talked to the master directly. But how do you handle master failure? One of the slaves will be elected master, so how do you know where to connect?

Asking all of this because we are using a RabbitMQ cluster with HAProxy in front, so we can decouple the cluster structure from our apps. This way, whenever a node goes down, HAProxy redirects to the living nodes. But we have problems when we kill one of the RabbitMQ nodes. The connection to RabbitMQ is persistent, so if it fails, you have to recreate it. Also, you have to resend the messages in these cases, otherwise you will lose them.

Even with all of this, messages can still be lost, because they may be in transit when I kill a node (in some buffer, somewhere on the network, etc.). So you have to use transactions or publisher confirms, which guarantee delivery only after the message has been replicated to all the mirrors. But here is another issue: you may get duplicate messages, because the broker might have sent a confirmation that never reached the producer (due to network failures, etc.). Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.
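
For context, this is roughly how we enable publisher confirms (a minimal sketch with the Python pika client; the queue name, broker address, and payload are placeholders, not our real setup):

```python
# Minimal publisher-confirms sketch using the Python "pika" client.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Enable confirm mode: basic_publish will now raise if the broker
# does not positively acknowledge the message.
channel.confirm_delivery()

try:
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=b"payload",
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        mandatory=True,  # raise UnroutableError if no queue receives it
    )
    print("message confirmed by the broker")
except pika.exceptions.UnroutableError:
    print("message returned: no queue bound to this routing key")
except pika.exceptions.NackError:
    print("broker nacked the message; resending may produce a duplicate")
finally:
    connection.close()
```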

Is there a way of avoiding this? Or do I have to choose between losing a couple of messages and duplicating some messages?

Toein answered 24/11, 2014 at 12:17 Comment(0)

Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node?

This blog outlines exactly what happens.

But what happens when I publish to a slave node? Will that node do the same thing and send the message to the master first?

The message will be redirected to the master Queue - that is, the node on which the Queue was created.

But how do you handle master failure? One of the slaves will be elected master, so how do you know where to connect?

Again, this is covered here. Essentially, you need a separate service that polls RabbitMQ and determines whether nodes are alive or not. RabbitMQ provides a management API for this. Your publishing and consuming applications need to refer to this service, either directly or through a mutual data-store, in order to determine the correct node to publish to or consume from.
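
As a rough illustration, such a monitor could poll the management HTTP API's /api/nodes endpoint. This is only a sketch; the host, port, and credentials below are placeholder assumptions:

```python
# Sketch of a monitor that polls the RabbitMQ management HTTP API
# (rabbitmq_management plugin) to discover which cluster nodes are running.
import requests

def live_nodes(host="localhost", port=15672, user="guest", password="guest"):
    resp = requests.get(
        f"http://{host}:{port}/api/nodes",
        auth=(user, password),
        timeout=5,
    )
    resp.raise_for_status()
    # Each entry describes one cluster node; "running" tells us if it is up.
    return [node["name"] for node in resp.json() if node.get("running")]

print(live_nodes())
```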

The connection to RabbitMQ is persistent, so if it fails, you have to recreate it. Also, you have to resend the messages in these cases, otherwise you will lose them.

You need to subscribe to connection-interrupted events to react to severed connections. You will need to build in some level of redundancy on the client in order to ensure that messages are not lost. I suggest, as above, that you introduce a service specifically designed to interrogate RabbitMQ. Your client can attempt to publish a message to the last known active connection, and should this fail, the client might ask the monitor service for an up-to-date listing of the RabbitMQ cluster. Assuming that there is at least one active node, the client may then establish a connection to it and publish the message successfully.
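
To illustrate the publish-with-failover idea, here is a sketch in Python with pika. The candidate host names are hypothetical; in practice they would come from the monitor service described above:

```python
# Sketch: on a severed connection, try the next candidate node and republish.
import pika

CANDIDATE_NODES = ["rabbit-1", "rabbit-2", "rabbit-3"]  # hypothetical hosts

def publish_with_retry(body, queue="orders"):
    last_error = None
    for host in CANDIDATE_NODES:
        try:
            conn = pika.BlockingConnection(pika.ConnectionParameters(host=host))
            ch = conn.channel()
            ch.confirm_delivery()  # only treat a confirmed publish as success
            ch.basic_publish(exchange="", routing_key=queue, body=body,
                             properties=pika.BasicProperties(delivery_mode=2))
            conn.close()
            return host  # succeeded against this node
        except pika.exceptions.AMQPConnectionError as exc:
            last_error = exc  # node unreachable; try the next one
    raise RuntimeError("no reachable RabbitMQ node") from last_error
```

Note that a republish after a lost confirm is exactly where duplicates can be introduced, which is why the deduplication discussed below still matters.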

Even with all of this, messages can still be lost, because they may be in transit when I kill a node

There are certain edge-cases that you can't cover with redundancy, and neither can RabbitMQ. For example, when a message lands in a Queue, the HA policy invokes a background process to copy the message to a backup node. During this process there is potential for the message to be lost before it is persisted to the backup node. Should the active node fail at that moment, the message will be lost for good. There is nothing that can be done about this. Unfortunately, when we get down to the level of actual bytes travelling across the wire, there's a limit to the safeguards that we can build.

Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.

You can handle this in a number of ways. For example, setting the message-ttl to a relatively low value will ensure that duplicated messages don't remain on the Queue for extended periods of time. You can also tag each message with a unique reference, and check that reference at the consumer level; a sketch of this follows below. Of course, this requires storing a cache of processed messages to compare incoming messages against; the idea being that if a previously processed message arrives, its tag will already be in the consumer's cache, and the message can be ignored.
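
Here is a sketch of that reference-tag approach in Python with pika. The producer would stamp each message with a unique message_id; the queue name is a placeholder, and a production cache would be bounded or externalized (e.g. Redis) rather than an in-process set:

```python
# Sketch of consumer-side deduplication keyed on the message_id property.
import pika

seen_ids = set()

def process(body):
    print("processing", body)  # hypothetical business logic

def on_message(channel, method, properties, body):
    msg_id = properties.message_id
    if msg_id is not None and msg_id in seen_ids:
        # Duplicate delivery: acknowledge and drop without reprocessing.
        channel.basic_ack(delivery_tag=method.delivery_tag)
        return
    process(body)
    seen_ids.add(msg_id)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.basic_consume(queue="orders", on_message_callback=on_message)
channel.start_consuming()
```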

One thing that I'd stress with AMQP and Queue-based solutions in general is that your infrastructure provides the tools, but not the entire solution. You have to bridge those gaps based on your business needs. Often, the best solution is derived through trial and error. I hope my suggestions are of use. I blog about a number of RabbitMQ design solutions, including the issues you mentioned, here if you're interested.

Kalinin answered 25/11, 2014 at 17:43 Comment(4)
Thank you Paul. You are a god. Just to make sure before I move to implementation, can you please confirm this: 1) I can still use HAProxy and publisher confirms and I won't lose any messages. I will have duplicate messages, which I have to remove somehow. I will have performance issues (due to extra hops to the master when first reaching the slaves), but my data will be "bullet-proof". 2) In order to increase performance, I will create a monitor service so I send my requests only to the master every time, but I still need to deal with duplicates. Thanks.Toein
You can still use HAProxy, but you'll incur extra network hops with a round-robin configuration. If you want to achieve even load-balancing, please read this: insidethecpu.com/2014/11/17/load-balancing-a-rabbitmq-cluster It's very unlikely that you will have duplicate messages. I think that setting the message-ttl property is sufficient to remove duplicates, though adding a reference-tag, as I mentioned, will solve the problem. I'll shortly be releasing a RabbitMQ library in C# that achieves all of the above. Keep monitoring my blog for updates.Kalinin
Actually I did end up having duplicate messages. I ran a test a couple of times, publishing 10000 messages to a 2-node RabbitMQ cluster. I killed one node and I got 10011-10012 messages. One of my consuming APIs is idempotent, so the final result was OK. Thanks a lot.Toein
That's very interesting and worth looking into. You're welcome.Kalinin
