Run glusterfs cluster using DaemonSet
I've been trying to run a GlusterFS cluster on my Kubernetes cluster using these manifests:

glusterfs-service.json

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "type": "NodePort",
    "selector": {
      "name": "gluster"
    },
    "ports": [
      {
        "port": 1
      }
    ]
  }
}

and glusterfs-server.json:

{
  "apiVersion": "extensions/v1beta1",
  "kind": "DaemonSet",
  "metadata": {
    "labels": {
      "name": "gluster"
    },
    "name": "gluster"
  },
  "spec": {
    "selector": {
      "matchLabels": {
        "name": "gluster"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "gluster"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "gluster",
            "image": "gluster/gluster-centos",
            "livenessProbe": {
              "exec": {
                "command": [
                  "/bin/bash",
                  "-c",
                  "systemctl status glusterd.service"
                ]
              }
            },
            "readinessProbe": {
              "exec": {
                "command": [
                  "/bin/bash",
                  "-c",
                  "systemctl status glusterd.service"
                ]
              }
            },
            "securityContext": {
              "privileged": true
            },
            "volumeMounts": [
              {
                "mountPath": "/mnt/brick1",
                "name": "gluster-brick"
              },
              {
                "mountPath": "/etc/gluster",
                "name": "gluster-etc"
              },
              {
                "mountPath": "/var/log/gluster",
                "name": "gluster-logs"
              },
              {
                "mountPath": "/var/lib/glusterd",
                "name": "gluster-config"
              },
              {
                "mountPath": "/dev",
                "name": "gluster-dev"
              },
              {
                "mountPath": "/sys/fs/cgroup",
                "name": "gluster-cgroup"
              }
            ]
          }
        ],
        "dnsPolicy": "ClusterFirst",
        "hostNetwork": true,
        "volumes": [
          {
            "hostPath": {
              "path": "/mnt/brick1"
            },
            "name": "gluster-brick"
          },
          {
            "hostPath": {
              "path": "/etc/gluster"
            },
            "name": "gluster-etc"
          },
          {
            "hostPath": {
              "path": "/var/log/gluster"
            },
            "name": "gluster-logs"
          },
          {
            "hostPath": {
              "path": "/var/lib/glusterd"
            },
            "name": "gluster-config"
          },
          {
            "hostPath": {
              "path": "/dev"
            },
            "name": "gluster-dev"
          },
          {
            "hostPath": {
              "path": "/sys/fs/cgroup"
            },
            "name": "gluster-cgroup"
          }
        ]
      }
    }
  }
}

Then on my pod definition, I'm doing:

"volumes": [
  {
    "name": "< volume name >",
    "glusterfs": {
      "endpoints": "glusterfs-cluster.default.svc.cluster.local",
      "path": "< gluster path >",
      "readOnly": false
    }
  }
]
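
(For comparison, the upstream Kubernetes GlusterFS example skips the Service selector entirely and defines the Endpoints object by hand in the pod's namespace, under the same name the pod's `endpoints` field refers to. A minimal sketch, with a placeholder node IP:)

```json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "10.240.106.152" }],
      "ports": [{ "port": 1 }]
    }
  ]
}
```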

But the pod creation is timing out because it can't mount the volume.

It also looks like only one of the glusterfs pods is running.

Here are my logs: https://i.sstatic.net/N4xlj.jpg

I then tried to run my pod in the same namespace as my gluster cluster, and I'm now getting this error:

Operation for "\"kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates\" (\"01a0834e-64ab-11e6-af52-42010a840072\")" failed.
No retries permitted until 2016-08-17 18:51:20.61133778 +0000 UTC (durationBeforeRetry 2m0s).
Error: MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates" (spec.Name: "ssl-certificates") pod "01a0834e-64ab-11e6-af52-42010a840072" (UID: "01a0834e-64ab-11e6-af52-42010a840072") with: glusterfs: mount failed:
mount failed: exit status 1
Mounting arguments:
10.132.0.7:ssl_certificates /var/lib/kubelet/pods/01a0834e-64ab-11e6-af52-42010a840072/volumes/kubernetes.io~glusterfs/ssl-certificates
glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/ssl-certificates/caddy-server-1648321103-epvdi-glusterfs.log]
Output: Mount failed. Please check the log file for more details. the following error information was pulled from the glusterfs log to help diagnose this issue:
[2016-08-17 18:49:20.583585] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ssl_certificates)
[2016-08-17 18:49:20.610531] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
Southernmost answered 16/8, 2016 at 14:47 Comment(2)
Are you still having this problem? Was it by chance on Google Container Engine?Rydder
One of my colleagues has successfully deployed Gluster on K8S (Probably dated by 1/2 versions, but should not matter I guess): blog.infracloud.io/gluster-heketi-kubernetesRoad

The logs clearly say what's going on:

failed to get endpoints glusterfs-cluster [endpoints "glusterfs-cluster" not found]

because:

  "ports": [
  {
    "port": 1
  }

is bogus in a couple of ways. First, a port of "1" is very suspicious. Second, it has no matching containerPort: on the DaemonSet side to which kubernetes could point that Service -- thus, it will not create Endpoints for the (podIP, protocol, port) tuple. Because glusterfs (reasonably) wants to contact the underlying Pods directly, without going through the Service, it is unable to discover the Pods and everything comes to an abrupt halt.
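One way to sketch a fix (assuming glusterd's standard management port, 24007, since the image is the stock gluster/gluster-centos) is to give the Service a real port:

```json
"ports": [
  {
    "port": 24007
  }
]
```

and declare the matching containerPort on the DaemonSet's container:

```json
"ports": [
  {
    "name": "glusterd",
    "containerPort": 24007
  }
]
```

With that in place, Kubernetes can populate the `glusterfs-cluster` Endpoints from the ready gluster Pods, and the glusterfs mount plugin has something to look up. The Endpoints must also exist in the same namespace as the consuming pod, which matches the namespace error seen above.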

Rhynd answered 7/7, 2018 at 5:31 Comment(0)
