How to correctly set up RabbitMQ on OpenShift

I have created a new app on OpenShift using this image: https://hub.docker.com/r/luiscoms/openshift-rabbitmq/

It runs successfully and I can use it. I have added a persistent volume to it. However, every time a pod is restarted, I lose all my data. This is because RabbitMQ uses the hostname to create the database directory.

For example:

node           : rabbit@openshift-rabbitmq-11-9b6p7
home dir       : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash    : BsUC9W6z5M26164xPxUTkA==
log            : tty
sasl log       : tty
database dir   : /var/lib/rabbitmq/mnesia/rabbit@openshift-rabbitmq-11-9b6p7

How can I set RabbitMQ to always use the same database dir?

Pralltriller answered 8/8, 2017 at 13:38 Comment(0)

You should be able to set the environment variable RABBITMQ_MNESIA_DIR to override the default configuration. This can be done via the OpenShift console by adding an entry to the environment section of the deployment config, or via the oc tool, for example:

oc set env dc/my-rabbit RABBITMQ_MNESIA_DIR=/myDir

You would then need to mount the persistent volume inside the Pod at the required path. Since you have said it is already created, you just need to update the mount, for example:

oc volume dc/my-rabbit --add --overwrite --name=my-pv-name --mount-path=/myDir

You will need to make sure you have correct read/write access on the provided mount path.
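For reference, the relevant parts of the DeploymentConfig would end up looking roughly like this. The container, volume and claim names are only illustrative, and the apiVersion can differ between OpenShift versions:

apiVersion: apps.openshift.io/v1     # plain "v1" on older OpenShift releases
kind: DeploymentConfig
metadata:
  name: my-rabbit
spec:
  template:
    spec:
      containers:
      - name: rabbitmq
        image: luiscoms/openshift-rabbitmq
        env:
        - name: RABBITMQ_MNESIA_DIR   # fixed database dir
          value: /myDir
        volumeMounts:
        - name: my-pv-name
          mountPath: /myDir           # must match RABBITMQ_MNESIA_DIR
      volumes:
      - name: my-pv-name
        persistentVolumeClaim:
          claimName: my-rabbit-claim  # illustrative claim name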

EDIT: Some additional workarounds based on issues raised in the comments.

The issues caused by the dynamic hostname could be solved in a number of ways:

1. (Preferred IMO) Move the deployment to a StatefulSet. A StatefulSet provides a stable name, and hence a stable network identity, for each Pod, and must be fronted by a headless service. This feature is out of beta as of Kubernetes 1.9 and has been a tech preview in OpenShift since version 3.5. See the sketch after this list.

2. Set the hostname for the Pod if StatefulSets are not an option. This can be done by making the hostname static with an environment variable, e.g. oc set env dc/example HOSTNAME=example, and setting RABBITMQ_NODENAME to a fixed value likewise.
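A minimal sketch of the StatefulSet approach, assuming the same image as in the question; names, storage size and the exact apiVersion are illustrative and may need adjusting for your cluster version:

# Headless service that gives each Pod a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-headless
spec:
  clusterIP: None
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq-headless
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: luiscoms/openshift-rabbitmq
        ports:
        - containerPort: 5672
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq/mnesia  # database dir from the question
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi                          # illustrative size

The Pod is then always named rabbitmq-0 (rabbitmq-1, ... for more replicas), so the RabbitMQ node name and the Mnesia directory stay stable across restarts.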
Dunk answered 12/8, 2017 at 21:48 Comment(8)
This works, thank you. Do you maybe know how I could set up a RabbitMQ cluster? If I let my rabbit node names be derived from pod names, then I think it could work, but if a pod is restarted, RabbitMQ sets a new node name and the cluster wouldn't work anymore. If I set a node name with the environment variable RABBITMQ_NODENAME, then the DNS lookup in OpenShift for that node name doesn't work...Pralltriller
There seem to be various ways clustering could be achieved, but you might look at using StatefulSets. Linking a guide for configuration on Kubernetes you might find useful: wesmorgan.svbtle.com/…Dunk
Hi. Tried your solution here. But it doesn't work for me. RabbitMQ reads the database but then it tries to connect to the previous node that created it and I get this {could_not_start,rabbit, {{failed_to_cluster_with, ['madelink-rabbitmq-1@rabbitmq-11-71xk8'], "Mnesia could not connect to any nodes."}, {rabbit,start,[normal,[]]}}}. It seems like the node variable is still changing (actually madelink-rabbitmq-1@rabbitmq-11-fztv5)Incrocci
@Incrocci need more info. What are you trying to do? What steps have you taken?Dunk
@user2983542 I'm trying to do the exact same thing as the OP: persist the data of a RabbitMQ pod in OpenShift. I created the deployment configuration using OpenShift-provided templates, and set up a persistent volume. Then I set RABBITMQ_MNESIA_DIR so it wouldn't move. But when I delete the pod and let it be recreated, the pod gets another name. RabbitMQ reads the data in the right directory but notices that another pod with another name (the previous, deleted one) exists and tries to connect to it. And I get the error above.Incrocci
Setting the variable RABBITMQ_NODENAME just changed the first part of the node name: rabbitmq-1@. The other part is still generated using the hostname.Incrocci
@Incrocci OK, as well as the answer below, one thing you could do is create the pod in a StatefulSet so that the name will be consistent across restarts.Dunk
Alternatively, if you set RABBITMQ_NODENAME to a value ending with @localhost, the new instance will use the same Mnesia files even though the actual pod has a different name every time.Caracara

I was able to get it to work by setting the HOSTNAME environment variable. OSE normally sets that value to the pod name, so it changes every time the pod restarts. By setting it explicitly, the pod's hostname doesn't change when the pod restarts.

Combined with a persistent volume, the queues, messages, users and, I assume, whatever other configuration is persisted through pod restarts.

This was done on an OSE 3.2 server. I just added an environment variable to the deployment config. You can do it through the UI or with the oc CLI:

oc set env dc/my-rabbit HOSTNAME=some-static-name

This will probably be an issue if you run multiple pods for the service, but in that case you would need to set up proper RabbitMQ clustering, which is a whole different beast.

Ita answered 6/1, 2018 at 4:52 Comment(0)

The easiest and production-safest way to run RabbitMQ on Kubernetes, including OpenShift, is the RabbitMQ Cluster Operator.

See this video on how to deploy RabbitMQ on OpenShift.
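For example, once the operator is installed, a single-node broker with persistent storage can be declared with a RabbitmqCluster resource along these lines (the name and storage size are illustrative):

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-rabbit
spec:
  replicas: 1
  persistence:
    storage: 10Gi   # illustrative size; the operator creates the backing PVC

The operator creates the StatefulSet, services and persistent volume claims itself, so the changing-hostname problem from the question does not come up.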

Goldi answered 5/2, 2021 at 10:56 Comment(1)
Most probably the other answers are simply outdated. If the thing works as described then it is definitely the best way. Thank you for your answer!Allynallys
