Rabbit mq - Error while waiting for Mnesia tables

I have installed RabbitMQ using a Helm chart on a Kubernetes cluster. The RabbitMQ pod keeps restarting. On inspecting the pod logs I see the error below:

2020-02-26 04:42:31.582 [warning] <0.314.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_durable_queue]}
2020-02-26 04:42:31.582 [info] <0.314.0> Waiting for Mnesia tables for 30000 ms, 6 retries left

When I run kubectl describe pod I get the following output; the readiness probe is failing:

Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-rabbitmq-0
    ReadOnly:   false
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-config
    Optional:  false
  healthchecks:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-healthchecks
    Optional:  false
  rabbitmq-token-w74kb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rabbitmq-token-w74kb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                      From                                               Message
  ----     ------     ----                     ----                                               -------
  Warning  Unhealthy  3m27s (x878 over 7h21m)  kubelet, gke-analytics-default-pool-918f5943-w0t0  Readiness probe failed: Timeout: 70 seconds ...
Checking health of node [email protected] ...
Status of node [email protected] ...
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}

I have provisioned the above on a Kubernetes cluster on Google Cloud. I am not sure under what specific circumstances it started failing. I had to restart the pod, and it has been failing ever since.

What is the issue here?

Choosey answered 26/2, 2020 at 5:2 Comment(8)
Have you tried to describe the running pod? Could you provide more information about your setup? Is it cloud provisioned? Is it failing under specific conditions, or does it just fail after the helm install? - Esthete
This is the error I get. I have updated the question with the error details: Error: {:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}} Error: {:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}} - Choosey
Which helm chart exactly did you use? - Esthete
OK, let me test with the latest helm chart and try once again. - Choosey
This is the helm chart that I used: github.com/helm/charts/tree/master/stable/rabbitmq. These are the values that I used: github.com/helm/charts/blob/master/stable/rabbitmq/… Only this part was commented out in values-production.yaml: # extraPlugins: "rabbitmq_auth_backend_ldap" - Choosey
When I uninstalled and reinstalled rabbitmq using helm, it was using the same persistent volume. I tried deleting the persistent volume and reinstalled rabbitmq. The pods are running now without any error. Thanks for the help. - Choosey
Could you solve your problem? - Tamanaha
@AmirSoleimani - Your solution works. - Choosey

test this deploy:

kind: Service
apiVersion: v1
metadata:
  namespace: rabbitmq-namespace
  # NodePort service for external access (management UI, AMQP, STOMP)
  name: rabbitmq-lb
  labels:
    app: rabbitmq
    type: LoadBalancer  
spec:
  type: NodePort
  ports:
   - name: http
     protocol: TCP
     port: 15672
     targetPort: 15672
     nodePort: 31672
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
     nodePort: 30672
   - name: stomp
     protocol: TCP
     port: 61613
     targetPort: 61613
  selector:
    app: rabbitmq
---
kind: Service 
apiVersion: v1
metadata:
  namespace: rabbitmq-namespace
  # must match the StatefulSet's serviceName so the per-pod DNS names resolve
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  # Headless service to give the StatefulSet a DNS which is known in the cluster (hostname-#.app.namespace.svc.cluster.local, )
  # in our case - rabbitmq-#.rabbitmq.rabbitmq-namespace.svc.cluster.local  
  clusterIP: None
  ports:
   - name: http
     protocol: TCP
     port: 15672
     targetPort: 15672
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
   - name: stomp
     port: 61613
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
  namespace: rabbitmq-namespace
data:
  enabled_plugins: |
      [rabbitmq_management,rabbitmq_peer_discovery_k8s,rabbitmq_stomp].

  rabbitmq.conf: |
      ## Cluster formation. See http://www.rabbitmq.com/cluster-formation.html to learn more.
      cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
      cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
      ## Should RabbitMQ node name be computed from the pod's hostname or IP address?
      ## IP addresses are not stable, so using [stable] hostnames is recommended when possible.
      ## Set to "hostname" to use pod hostnames.
      ## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME
      ## environment variable.
      cluster_formation.k8s.address_type = hostname   
      ## Important - this is the suffix of the hostname, as each node gets "rabbitmq-#", we need to tell what's the suffix
      ## it will give each new node that enters the way to contact the other peer node and join the cluster (if using hostname)
      cluster_formation.k8s.hostname_suffix = .rabbitmq.rabbitmq-namespace.svc.cluster.local
      ## How often should node cleanup checks run?
      cluster_formation.node_cleanup.interval = 30
      ## Set to false if automatic removal of unknown/absent nodes
      ## is desired. This can be dangerous, see
      ##  * http://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup
      ##  * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ
      cluster_formation.node_cleanup.only_log_warning = true
      cluster_partition_handling = autoheal
      ## See http://www.rabbitmq.com/ha.html#master-migration-data-locality
      queue_master_locator=min-masters
      ## See http://www.rabbitmq.com/access-control.html#loopback-users
      loopback_users.guest = false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: rabbitmq-namespace
spec:
  serviceName: rabbitmq
  replicas: 3
  selector:
    matchLabels:
      name: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
        name: rabbitmq
        state: rabbitmq
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      serviceAccountName: rabbitmq
      terminationGracePeriodSeconds: 10
      containers:        
      - name: rabbitmq-k8s
        image: rabbitmq:3.8.3
        volumeMounts:
          - name: config-volume
            mountPath: /etc/rabbitmq
          - name: data
            mountPath: /var/lib/rabbitmq/mnesia
        ports:
          - name: http
            protocol: TCP
            containerPort: 15672
          - name: amqp
            protocol: TCP
            containerPort: 5672
        livenessProbe:
          exec:
            command: ["rabbitmqctl", "status"]
          initialDelaySeconds: 60
          periodSeconds: 60
          timeoutSeconds: 10
        resources:
            requests:
              memory: "0"
              cpu: "0"
            limits:
              memory: "2048Mi"
              cpu: "1000m"
        readinessProbe:
          exec:
            command: ["rabbitmqctl", "status"]
          initialDelaySeconds: 20
          periodSeconds: 60
          timeoutSeconds: 10
        imagePullPolicy: Always
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: RABBITMQ_USE_LONGNAME
            value: "true"
          # See a note on cluster_formation.k8s.address_type in the config file section
          - name: RABBITMQ_NODENAME
            value: "rabbit@$(HOSTNAME).rabbitmq.$(NAMESPACE).svc.cluster.local"
          - name: K8S_SERVICE_NAME
            value: "rabbitmq"
          - name: RABBITMQ_ERLANG_COOKIE
            value: "mycookie"      
      volumes:
        - name: config-volume
          configMap:
            name: rabbitmq-config
            items:
            - key: rabbitmq.conf
              path: rabbitmq.conf
            - key: enabled_plugins
              path: enabled_plugins
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - "ReadWriteOnce"
      storageClassName: "default"
      resources:
        requests:
          storage: 3Gi

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq 
  namespace: rabbitmq-namespace 
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: rabbitmq-namespace 
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: rabbitmq-namespace
subjects:
- kind: ServiceAccount
  name: rabbitmq
  namespace: rabbitmq-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
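
If you do want to try it, a minimal way to apply the whole thing, assuming the manifests above are saved to a single file named rabbitmq.yaml (the namespace itself is not created by these manifests):

kubectl create namespace rabbitmq-namespace
kubectl apply -f rabbitmq.yaml

# watch the pods come up one by one (OrderedReady is the StatefulSet default)
kubectl -n rabbitmq-namespace get pods -w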
Tamanaha answered 19/7, 2020 at 11:25 Comment(3)
To be honest, this is not really helpful. Why would somebody want to just try it out, along with a bunch of configuration which is not relevant to him/her? - Dairy
@HarisOsmanagić you can edit the configmap depending on your usage. - Tamanaha
No, I can't, because there's too much in it. Everybody would appreciate seeing the exact parameter which fixes an issue, and not having to experiment with another config map. - Dairy

TLDR

helm upgrade rabbitmq --set clustering.forceBoot=true

Problem

The problem happens for the following reason:

  • All RMQ pods are terminated at the same time due to some reason (maybe because you explicitly set the StatefulSet replicas to 0, or something else)
  • One of them is the last one to stop (maybe just a tiny bit after the others). It stores this condition ("I'm standalone now") in its filesystem, which in k8s is the PersistentVolume(Claim). Let's say this pod is rabbitmq-1.
  • When you spin the StatefulSet back up, the pod rabbitmq-0 is always the first to start (see the StatefulSet deployment and scaling guarantees in the Kubernetes docs).
  • During startup, pod rabbitmq-0 first checks whether it's supposed to run standalone. But as far as it can see on its own filesystem, it's part of a cluster. So it checks for its peers and doesn't find any. This results in a startup failure by default.
  • rabbitmq-0 thus never becomes ready.
  • rabbitmq-1 never starts because that's how StatefulSets are deployed - one after another. If it were to start, it would start successfully, because it sees that it can run standalone.

So in the end, it's a bit of a mismatch between how RabbitMQ and StatefulSets work. RMQ says: "if everything goes down, just start everything at the same time - one node will be able to start, and as soon as that one is up, the others can rejoin the cluster." Kubernetes StatefulSets say: "starting everything all at once is not possible; we'll start with pod 0."
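
A quick way to confirm this is what's happening (pod name as in the question; the grep pattern is just the log line shown above):

# rabbitmq-0 stays 0/1 Ready while it waits for peers that will never be started
kubectl get pods

# the log keeps repeating the Mnesia table wait with a shrinking retry counter
kubectl logs rabbitmq-0 | grep -i "waiting for Mnesia tables"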

Solution

To fix this, there is a force_boot command for rabbitmqctl which basically tells an instance to start standalone if it doesn't find any peers. How you can use this from Kubernetes depends on the Helm chart and container you're using. In the Bitnami Chart, which uses the Bitnami Docker image, there is a value clustering.forceBoot = true, which translates to an env variable RABBITMQ_FORCE_BOOT = yes in the container, which will then issue the above command for you.
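
For the Bitnami chart, a sketch of what that looks like in practice (the release name rabbitmq and the bitnami/rabbitmq chart reference are assumptions - use whatever your release is actually called):

# set the value on the existing release, keeping all other values as they are
helm upgrade rabbitmq bitnami/rabbitmq --reuse-values --set clustering.forceBoot=true

# or, equivalently, in a values file:
# clustering:
#   forceBoot: true

Once the cluster has formed again, it is reasonable to turn the value back off, for the reasons the RabbitMQ core team answer below explains.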

But looking at the problem, you can also see why deleting PVCs will work (other answer). The pods will just all "forget" that they were part of a RMQ cluster the last time around, and happily start. I would prefer the above solution though, as no data is being lost.

Cynthla answered 10/3, 2021 at 14:59 Comment(8)
Also, there is a section about recovery in the Bitnami GitHub repo, which mentions both forceBoot and one more option (using the Parallel podManagementPolicy): github.com/bitnami/charts/tree/master/bitnami/… - Sandpiper
(1.) FYI, the same "Error while waiting for Mnesia tables" issue happens when you have 3 machines running rabbitmq configured for clustering, so it's not just a kubernetes/helm issue. (2.) Unfortunately for rabbitmq there is no config file setting for clustering.forceBoot, so I will have to clear out /var/lib/rabbitmq/mnesia to get the servers to start; all of them right now are stuck in a boot loop with "inconsistent_database". (3.) In my case, if you stop your rabbit cluster with server1,2,3 then you MUST start them in LIFO order, i.e. 3,2,1. This allows you to start without issue. - Spate
Relevant rabbitmq docs: rabbitmq.com/clustering.html#restarting - Spate
This should be marked as the answer. Thank you Ulli, you saved me hours of troubleshooting! - Fidellia
I had to add podManagementPolicy=Parallel as well. Even with the forceBoot option rabbitmq-0 was not starting. - Coppery
I am right now in a similar situation on a single-node docker swarm (where ordering isn't enforced by a statefulset). I started all three rabbitmq nodes (3 services, not 1 with 2 replicas) at the same time after a docker upgrade and they are all waiting on each other. Somehow the shutdown caused by the containerd restart made them think that none of them is the master now, and they refuse to elect one. I will set forceBoot=true on rabbit1 and see what happens. (Related issue here: github.com/helm/charts/issues/13485) - Smirk
Thank you, Ulli, for literally saving my day :) - Cronus
This is dangerous advice and should not be used as a standard way of running RabbitMQ. See the details in https://mcmap.net/q/459543/-rabbit-mq-error-while-waiting-for-mnesia-tables by @Michael Klishin - Tiaratibbetts

RabbitMQ core team member here. Force booting nodes was never meant to be used by default. The command exists only to make nodes boot when a number of cluster members have been lost permanently and thus will never be coming back.

Using force_boot is a hacky workaround that masks the fundamental problem or a set of problems, and can be dangerous (see below).

On Kubernetes, there is a commonly seen scenario where an unfortunately picked readiness probe can deadlock a cluster restart. The relevant doc sections are the clustering guide's section on restarting (rabbitmq.com/clustering.html#restarting) and the health check recommendations in the monitoring guide.

Here is a simplified short version:

  • In 3.x, RabbitMQ nodes expect their peers to come online within 5 minutes before continuing boot
  • A StatefulSet controller will not proceed to start any other nodes until the current one passes a readiness probe
  • Many probes effectively require a formed cluster; a single node will not suffice (unless there only ever was a single node)
  • Therefore the deployment process is deadlocked

With a basic readiness probe (see the doc sections above), all nodes must boot within (by default) 5 minutes, a period that can be extended if necessary. But forcing nodes to boot by default is wrong and should be unnecessary in 99% of scenarios.
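
If extending that window is all that is needed, the classic peer table wait is controlled by two rabbitmq.conf keys; a sketch with the documented default timeout and a raised retry count (the exact values are placeholders to adjust):

# how long each attempt waits for peers' Mnesia tables, and how many attempts are made
# (30000 ms x 10 retries is roughly the 5-minute default mentioned above)
mnesia_table_loading_retry_timeout = 30000
mnesia_table_loading_retry_limit = 15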

In combination with other deployment-time events, forced boots can lead to some of the same outcomes seen with deployment strategies that the RabbitMQ team recommends against.

Combining podManagementPolicy: Parallel with the simple "single node" readiness probe mentioned in the docs would be a much safer solution that does not abuse a feature created as a last resort for exceptional circumstances, namely when a portion of the cluster is permanently lost. podManagementPolicy: Parallel is what the RabbitMQ Cluster Operator uses, so the RabbitMQ core team practices what it preaches.
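
A minimal sketch of that combination for a hand-rolled StatefulSet (standard Kubernetes fields; rabbitmq-diagnostics ping is the basic node-local check from the RabbitMQ monitoring docs, and the probe timings are placeholders):

spec:
  podManagementPolicy: Parallel        # start all pods at once instead of one by one
  template:
    spec:
      containers:
      - name: rabbitmq
        readinessProbe:
          exec:
            # passes as soon as this node's runtime is up; does not require a formed cluster
            command: ["rabbitmq-diagnostics", "ping"]
          initialDelaySeconds: 10
          periodSeconds: 30
          timeoutSeconds: 10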

Rigi answered 6/5, 2024 at 23:46 Comment(1)
Fully agree here - Tootsy

I just deleted the existing Persistent Volume Claim and reinstalled rabbitmq, and it started working.

So every time after installing rabbitmq on a kubernetes cluster, if I scale the pods down to 0 and then scale them up again later, I get the same error. I also tried deleting the Persistent Volume Claim without uninstalling the rabbitmq helm chart, but I still got the same error.

So it seems that each time I scale the cluster down to 0, I need to uninstall the rabbitmq helm chart, delete the corresponding Persistent Volume Claims, and reinstall the rabbitmq helm chart to make it work again.
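
For reference, the delete-and-reinstall sequence looks roughly like this (release name rabbitmq, the PVC name from the question, and the stable/rabbitmq chart from the comments; everything stored on that volume is lost):

helm uninstall rabbitmq

# PVCs created from a StatefulSet's volumeClaimTemplates survive the uninstall,
# so they have to be removed explicitly
kubectl get pvc
kubectl delete pvc data-rabbitmq-0

# reinstall from whichever chart the release originally came from
helm install rabbitmq stable/rabbitmq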

Choosey answered 26/2, 2020 at 12:56 Comment(5)
Be aware that deleting persistent volume claims may destroy your data. I wouldn't do this in production. - Cynthla
I agree that in general deleting persistent volumes can make you lose your data, but specifically for rabbitmq developers (and many other rabbitmq users I know) we don't need, want, or use any rabbitmq persistence features. My colleague informed me that he sets the helm persistence to false and doesn't have any PVC for rabbitmq, which is what I'll try too. - Spate
Your solution of deleting the pod and PVC worked for me. I think rabbitmq was not shut down gracefully and that resulted in the mnesia files getting corrupted, so by deleting the PVC and pod I got a new pod with an empty PVC, and the helm chart for rabbitmq was finally able to come back online successfully. - Spate
Running the rabbitmq helm chart with persistence set to false is definitely better: 1. simpler config 2. fewer resources requested from kubernetes 3. better robustness to coming up and down 4. more reliable (no need for me to write a custom script to check for bad state and then delete the pod/PVC) - Spate
For anyone else here experiencing the same issue in tandem with Docker: I experienced this as well and saw this answer. Deleting the RabbitMQ Docker container and having it re-pull the image fixed the issue for our application. - Supination

If you are in the same scenario as me, where you don't know who deployed the helm chart or how it was deployed, you can edit the StatefulSet directly to avoid messing up more things.

I was able to make it work without deleting the helm chart.

kubectl -n rabbitmq edit statefulsets.apps rabbitmq

Under the spec section I added the env variable RABBITMQ_FORCE_BOOT = yes as follows:

    spec:
      containers:
      - env:
        - name: RABBITMQ_FORCE_BOOT # New Line 1 Added
          value: "yes"              # New Line 2 Added

And that should also fix the issue... but please first try to do it the proper way, as explained above by Ulli.
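
Once the cluster is healthy again, it is worth removing the variable so nodes don't force boot on every future restart; a sketch using kubectl set env (same namespace and StatefulSet name as above; the trailing "-" removes the variable):

kubectl -n rabbitmq set env statefulset/rabbitmq RABBITMQ_FORCE_BOOT-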

Enosis answered 21/3, 2022 at 21:54 Comment(0)

In my case the solution was simple.

Step 1: Downscale the StatefulSet; this will not delete the PVC.

kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=1

Step 2: Access the RabbitMQ pod.

kubectl exec -it rabbitmq-1-rabbitmq-0 -n teps-rabbitmq -- bash

Step 3: Force-boot the node.

rabbitmqctl stop_app
rabbitmqctl force_boot

Step 4: Rescale the StatefulSet.

  kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=4
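
The same sequence as a single sketch, run from outside the pod; the only addition is rabbitmqctl start_app, which brings the app back up on the force-booted node before the other replicas are scaled back in (names and namespace as in the steps above):

kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=1

kubectl exec rabbitmq-1-rabbitmq-0 -n teps-rabbitmq -- rabbitmqctl stop_app
kubectl exec rabbitmq-1-rabbitmq-0 -n teps-rabbitmq -- rabbitmqctl force_boot
kubectl exec rabbitmq-1-rabbitmq-0 -n teps-rabbitmq -- rabbitmqctl start_app

kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=4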
Loosejointed answered 8/12, 2021 at 6:44 Comment(0)

I also got a similar kind of error as given below.

2020-06-05 03:45:37.153 [info] <0.234.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-06-05 03:46:07.154 [warning] <0.234.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2020-06-05 03:46:07.154 [info] <0.234.0> Waiting for Mnesia tables for 30000 ms, 8 retries left

In my case, the slave node (server) of the RabbitMQ cluster was down. Once I started the slave node, the master node started without an error.
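
When a node is stuck in this wait loop, checking which cluster members are actually running from a node that is up narrows it down quickly (rabbitmqctl cluster_status is a standard command; where to run it depends on your setup):

# lists all known cluster members and which of them are currently running
rabbitmqctl cluster_status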

Durarte answered 5/6, 2020 at 4:0 Comment(1)
I tried a lot to solve the problem; in the end, I used the RabbitMQ Cluster Operator: rabbitmq.com/kubernetes/operator/operator-overview.html - Tamanaha
