We are defining an architecture that collects log data with Logstash shippers installed on various machines, indexes it centrally in a single Elasticsearch server, and uses Kibana as the graphical layer. We need a reliable messaging system between the Logstash shippers and Elasticsearch to guarantee delivery. What factors should be considered when selecting Redis over RabbitMQ as the data broker/messaging system between the Logstash shippers and Elasticsearch, or vice versa?
After evaluating both Redis and RabbitMQ, I chose RabbitMQ as our broker for the following reasons:
- RabbitMQ offers a built-in layer of security: you can use SSL certificates to encrypt the data you send to the broker, so no one can sniff your traffic and gain access to your vital organizational data (see the connection sketch below).
- RabbitMQ is a very stable product that can handle large numbers of events per second and many connections without becoming the bottleneck.
Regarding scaling, RabbitMQ has a built-in cluster implementation that you can combine with a load balancer to implement a redundant broker environment.
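For illustration, here is a minimal sketch of a shipper connecting to RabbitMQ over TLS with the Python pika client. The broker hostname, certificate paths, credentials, and queue name are all placeholder assumptions, not values from this setup:

```python
# Hypothetical sketch: publishing to RabbitMQ over TLS with pika.
# All hostnames, paths, and credentials below are placeholders.
import ssl
import pika

# Trust the broker's CA and present a client certificate.
context = ssl.create_default_context(cafile="/etc/rabbitmq/ca_certificate.pem")
context.load_cert_chain("/etc/rabbitmq/client_certificate.pem",
                        "/etc/rabbitmq/client_key.pem")

params = pika.ConnectionParameters(
    host="rabbitmq.example.com",
    port=5671,  # default AMQPS (TLS) port
    ssl_options=pika.SSLOptions(context),
    credentials=pika.PlainCredentials("shipper", "secret"),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Durable queue plus persistent messages so events survive a broker restart.
channel.queue_declare(queue="logstash", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="logstash",
    body=b'{"message": "hello"}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```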
Is my RabbitMQ cluster active-active or active-passive?
Now for the weaker points of using RabbitMQ:
- Most Logstash shippers do not support RabbitMQ; on the other hand, the best one, named Beaver, has an implementation that will send data to RabbitMQ without a problem.
- In its current version, Beaver's RabbitMQ implementation is somewhat slow (for my purposes): it was not able to handle a rate of 3000 events/sec from one server, and from time to time the service crashed.
- Right now I am working on a fix that will solve the performance problem for RabbitMQ and make the Beaver shipper more stable. The first solution is to add more processes that can run simultaneously, giving the shipper more throughput. The second solution is to change Beaver to send data to RabbitMQ asynchronously, which should theoretically be much faster. I hope to finish implementing both solutions by the end of this week. A rough sketch of the multi-process idea follows the links below.
You can follow the issue here: https://github.com/josegonzalez/python-beaver/issues/323
And check the pull request here: https://github.com/josegonzalez/python-beaver/pull/324
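For illustration only, here is a minimal sketch of the first idea: fanning log lines out to several publisher processes, each with its own RabbitMQ connection. This is not Beaver's actual implementation; the queue name, worker count, and fake log lines are assumptions:

```python
# Hypothetical sketch: multi-process publishing to RabbitMQ with pika.
# Not Beaver's real code; names and counts are illustrative.
import multiprocessing
import pika

def publish_worker(work: multiprocessing.Queue) -> None:
    # Each process holds its own connection; pika connections are not
    # safe to share across processes.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="logstash", durable=True)
    while True:
        line = work.get()
        if line is None:  # sentinel value: shut down cleanly
            break
        channel.basic_publish(exchange="", routing_key="logstash", body=line)
    connection.close()

if __name__ == "__main__":
    work = multiprocessing.Queue(maxsize=10000)
    workers = [multiprocessing.Process(target=publish_worker, args=(work,))
               for _ in range(4)]
    for w in workers:
        w.start()
    for i in range(100000):             # stand-in for tailing a log file
        work.put(f"log line {i}".encode())
    for _ in workers:                   # one sentinel per worker
        work.put(None)
    for w in workers:
        w.join()
```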
If you have more questions feel free to leave a comment.
Redis was created as a key-value data store, even though it has some basic message-broker capabilities.
RabbitMQ was created as a message broker, so it naturally has a rich set of message-broker capabilities.
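For context, the classic way Redis is pressed into broker duty is a list acting as a queue. A minimal sketch with the redis-py client (the list name "logstash" is an arbitrary example):

```python
# Minimal sketch: a Redis list as a simple work queue via redis-py.
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer: push a log event onto the head of the list.
r.lpush("logstash", b'{"message": "hello"}')

# Consumer: block until an event is available, then pop it from the tail.
_key, event = r.brpop("logstash")
print(event)
```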
I have been doing some research on this topic. If performance is important and persistence is not, RabbitMQ is a perfect choice. Redis is a technology developed with a different intent.
Following is a list of pros for using RabbitMQ over Redis:
- RabbitMQ uses the Advanced Message Queuing Protocol (AMQP), which can be configured to use SSL for an additional layer of security.
- RabbitMQ takes approximately 75% of the time Redis takes to accept messages.
- RabbitMQ supports per-message priorities, which workers can use to consume high-priority messages first (see the sketch after this list).
- Because RabbitMQ supports acknowledgements, there is no chance of losing a message if a worker crashes after consuming it, which is not the case with Redis.
- RabbitMQ has a good routing system for directing messages to different queues.
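To make the priority and acknowledgement points concrete, here is a minimal pika sketch; the queue name, priority ceiling, and message bodies are illustrative assumptions:

```python
# Hypothetical sketch: a priority queue plus manual acknowledgements in pika.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A queue that supports priorities 0-10; higher numbers are consumed first.
channel.queue_declare(queue="events", durable=True,
                      arguments={"x-max-priority": 10})

channel.basic_publish(exchange="", routing_key="events", body=b"routine event",
                      properties=pika.BasicProperties(priority=1))
channel.basic_publish(exchange="", routing_key="events", body=b"urgent event",
                      properties=pika.BasicProperties(priority=9))

# Consume with manual acks: if this process crashes before basic_ack,
# RabbitMQ re-queues the message instead of losing it.
for _ in range(2):
    method, properties, body = channel.basic_get(queue="events", auto_ack=False)
    if method is None:
        break
    print(body)                        # "urgent event" is delivered first
    channel.basic_ack(method.delivery_tag)

connection.close()
```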
A few cons for using RabbitMQ:
- RabbitMQ can be a little hard to maintain, and crashes can be hard to debug.
- Node-name or node-IP fluctuations can cause data loss, but if managed well, durable messages can solve the problem.
Redis has Sorted Sets, which allow priority-queue-like interactions. Redis can also be clustered/sharded to send different messages to different queues, even on different servers. I'm not sure about SSL directly in Redis, but I'm looking at AWS ElastiCache, and its Redis 3.2.6 allows at-rest and in-transit encryption. Note: I'm not at all saying Redis is better for this case; just pointing out that those may not be reasons to choose RabbitMQ over Redis.
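For example, a Sorted Set can be popped in score order to get priority-queue-like behavior. A minimal redis-py sketch (the key name and scores are arbitrary assumptions):

```python
# Minimal sketch: priority-queue-like behavior with a Redis Sorted Set.
# Convention here: lower score = higher priority.
import redis

r = redis.Redis()
r.zadd("prio_queue", {b"urgent event": 1, b"routine event": 5})

# Pop the highest-priority (lowest-score) member first.
member, score = r.zpopmin("prio_queue")[0]
print(member, score)   # b'urgent event' 1.0
```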
Monia, I have been wondering the same thing. Earlier recommendations by the Logstash folks favored Redis over RabbitMQ (http://logstash.net/docs/1.1.1/tutorials/getting-started-centralized); however, that section no longer exists in the current documentation, although there are generic notes on using a broker to deal with spikes here: https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html.
While I am also using RabbitMQ quite happily, I'm currently exploring a Redis broker, since the AMQP protocol is likely overkill for my logging use case.
If you specifically want to send logs from Logstash to Elasticsearch, you might want to use Filebeat instead of either Redis or RabbitMQ. Personally, I use fluent-bit to collect logs and send them to Elasticsearch.
However, the other answers on this page have a lot of out-of-date information regarding Redis's capabilities. Redis has supported:
- publish/subscribe since version 2.0
- clustering since version 3.0
- streams since version 5.0 (see the sketch after this list)
- SSL/TLS since version 6.0
- a copy-on-write append-only log for persistence to disk, which is useful for recovering from a crash; it works best with Redis 7.0 or newer
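As a concrete example of the Streams API mentioned above, here is a minimal redis-py sketch using a consumer group; the stream, group, and consumer names are arbitrary assumptions:

```python
# Minimal sketch: Redis Streams as a log pipeline via redis-py.
# Stream "logs", group "indexers", consumer "worker-1" are made-up names.
import redis

r = redis.Redis()

# Producer: append a log event to the stream.
r.xadd("logs", {"message": "hello", "host": "web-1"})

# One-time setup: create a consumer group that starts from the beginning.
try:
    r.xgroup_create("logs", "indexers", id="0")
except redis.ResponseError:
    pass  # group already exists

# Consumer: read one undelivered entry as a group member, then acknowledge it,
# so a crashed consumer's pending entries can be claimed by another worker.
entries = r.xreadgroup("indexers", "worker-1", {"logs": ">"}, count=1, block=1000)
for stream, messages in entries:
    for msg_id, fields in messages:
        print(msg_id, fields)
        r.xack("logs", "indexers", msg_id)
```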
But there are some limitations:
- Redis is still not as focused as RabbitMQ when it comes to message durability and crash recovery.
- Redis pub/sub is not as scalable as RabbitMQ: pub/sub messages were not sharded across Redis cluster nodes until relatively recently. Redis Streams are a newer, more scalable API.
Quick questions to ask:
- Why do you need a broker? If you're using logstash or logstash-forwarder to read files from these servers, both will slow down if the pipeline gets congested.
- do you have any experience with administering rabbit or redis? All things being equal, the tool you know how to use is the better tool.
In the realm of opinions, I've run redis as a broker, and hated it. Of course, that could have been my inexperience with redis (not a problem with the product itself), but it was the weakest link in the pipeline and always failed when we needed it most.