Cannot connect to a MongoDB pod in Kubernetes (Connection refused)
I have a few remote virtual machines on which I want to deploy some MongoDB instances and then make them accessible remotely, but for some reason I can't seem to make this work.

These are the steps I took:

  • I started a Kubernetes pod running MongoDB on a remote virtual machine.
  • Then I exposed it through a Kubernetes NodePort service (a sketch of such a service follows this list).
  • Then I tried to connect to the MongoDB instance from my laptop, but it didn't work.
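
For reference, here is a minimal sketch of the kind of NodePort service I mean (the actual manifest isn't reproduced here, so the named target port and the nodePort value are only assumptions; the selector matches the labels of the deployment further down):

apiVersion: v1
kind: Service
metadata:
  name: mongo-remote
spec:
  type: NodePort
  selector:
    name: mongo-remote            # matches the pod labels of the deployment
  ports:
  - port: 5000                    # cluster-internal service port
    targetPort: mongocontainer    # the named containerPort on the pod
    nodePort: 30500               # example value in the default 30000-32767 range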

Here is the command I used to try to connect:

$ mongo host:NodePort   

(by "host" I mean the Kubernetes master).

And here is its output:

MongoDB shell version v4.0.3
connecting to: mongodb://host:NodePort/test
2018-10-24T21:43:41.462+0200 E QUERY    [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException:
Error connecting to host:NodePort :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed

From the Kubernetes master, I made sure that the MongoDB pod was running. Then I ran a shell in the container and checked that the MongoDB server was working properly. Moreover, I had previously granted remote access to the MongoDB server by specifying the "--bind_ip 0.0.0.0" option in its YAML description. To make sure that this option had been applied, I ran this command inside the MongoDB instance, from the same shell:

db._adminCommand({ getCmdLineOpts: 1 })

And here is the output:

{
    "argv" : [
        "mongod",
        "--bind_ip",
        "0.0.0.0"
    ],
    "parsed" : {
        "net" : {
            "bindIp" : "0.0.0.0"
        }
    },
    "ok" : 1
}

So the MongoDB server should actually be accessible remotely.
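
In case it helps, these are roughly the checks I ran (the pod name is a placeholder):

$ kubectl get pods -o wide                    # confirm the pod is Running
$ kubectl exec -it <mongo-pod> -- bash        # open a shell in the container
$ mongo --eval 'db.runCommand({ ping: 1 })'   # inside the container: mongod answers locally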

I can't figure out whether the problem is caused by Kubernetes or by MongoDB.

As a test, I followed exactly the same steps using MySQL instead, and that worked: I ran a MySQL pod, exposed it with a Kubernetes service to make it accessible remotely, and then successfully connected to it from my laptop. This leads me to think that MongoDB is the culprit here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.

Could someone help me shed some light on this? Or tell me how to debug this problem?

EDIT:

Here is the output of the kubectl describe deployment <mongo-deployment> command, as requested in the comments:

Name:                   mongo-remote
Namespace:              default
CreationTimestamp:      Thu, 25 Oct 2018 06:31:24 +0000
Labels:                 name=mongo-remote
Annotations:            deployment.kubernetes.io/revision=1
Selector:               name=mongo-remote
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  name=mongo-remote
  Containers:
   mongocontainer:
    Image:      mongo:4.0.3
    Port:       5000/TCP
    Host Port:  0/TCP
    Command:
      mongod
      --bind_ip
      0.0.0.0
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mongo-remote-655478448b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set mongo-remote-655478448b to 1

For the sake of completeness, here is the YAML description of the deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-remote
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo-remote
    spec:
      containers:
        - name: mongocontainer
          image: mongo:4.0.3
          imagePullPolicy: Always
          command:
          - "mongod"
          - "--bind_ip"
          - "0.0.0.0"
          ports:
          - containerPort: 5000
            name: mongocontainer
      nodeSelector:
        kubernetes.io/hostname: xxx
Magus answered 24/10, 2018 at 21:59 Comment(7)
Where is your mongodb/Kubernetes running? Also, can you post kubectl describe deployment <mongo-deployment>? Or the statefulset, if mongo is running in a StatefulSet? (Bailable)
What do you mean by where is it running? As I said, the pod is running on a virtual machine (more specifically, a worker node of my Kubernetes cluster). Let me know if you'd like to know something else. I edited my original post to answer your second question, because it wouldn't fit here. (Magus)
Can you access any other Kubernetes service through a NodePort? And what is the OS of the nodes? (Eyre)
Yes, I can. As I said in the original post: "As a test, I followed exactly the same steps using MySQL instead, and that worked (that is, I ran a MySQL pod and exposed it with a Kubernetes service, to make it accessible remotely, and then I successfully connected to it from my laptop)." The OS for all the nodes is Ubuntu 16.04. (Magus)
The config looks good. It looks like a firewall rule on your node. Which cloud is your VM running on? Your own cloud? Are you allowing the NodePort from your laptop to wherever you are running the pod? (Bailable)
Yes, it's my own cloud (it's neither AWS nor GCP). I was simply given some remote virtual machines and I deployed a Kubernetes cluster on them. I'm not sure what you mean by "allowing the NodePort", though; I didn't have to perform any particular action when I did this with MySQL (other than exposing the MySQL deployment with a NodePort service, obviously). So I'm wondering why MongoDB would require any additional steps. (Magus)
Ok, I solved the problem. Thanks everyone for the help anyway! (Magus)

I found the mistake (and, as I suspected, it was a silly one).
The problem was in the YAML description of the deployment. As no port was specified in the mongod command, mongod was listening on its default port (27017), while the containerPort (and hence the port the service was targeting) was set to 5000.

So the solution is either to set the containerPort to MongoDB's default port, like so:

      command:
      - "mongod"
      - "--bind_ip"
      - "0.0.0.0"
      ports:
      - containerPort: 27017
        name: mongocontainer

Or to make mongod listen on the port declared as the containerPort, like so:

      command:
      - "mongod"
      - "--bind_ip"
      - "0.0.0.0"
      - "--port"
      - "5000"
      ports:
      - containerPort: 5000
        name: mongocontainer
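
Either way, the key invariant is that the port mongod actually listens on must match the port the service targets; declaring a containerPort by itself does not change where mongod listens. As a quick check from outside the cluster (host and NodePort are placeholders, as above):

$ mongo host:NodePort --eval 'db.runCommand({ ping: 1 })'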
Magus answered 26/10, 2018 at 8:20 Comment(0)
