Nginx Ingress - 502 Bad Gateway - failed (111: Connection refused) while connecting to upstream - Kubernetes

I am receiving the following error from the ingress-nginx-controller, which ultimately results in a 502 Bad Gateway. I can port-forward to my Node.js service without any errors.

2020/05/30 21:31:12 [error] 485#485: *18798 connect() failed (111: Connection refused) while connecting to upstream, client: 177.183.249.235, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.0.249:3000/favicon.ico", host: "167.172.15.8", referrer: "http://167.172.15.8/"

I am using Pulumi, and this is the Service definition:

export const frontendService = new k8s.core.v1.Service(config.appNameFrontend, {
  metadata: {
    namespace: namespace.metadata.name,
    labels: {
      app: config.appNameFrontend,
      service: config.appNameFrontend
    }
  },
  spec: {
    ports: [{
      name: "http",
      port: 80,
      targetPort: 3000
    }],
    selector: {
      app: config.appNameFrontend
    }
  }
})

The deployment:

const frontendDeployment = new k8s.apps.v1.Deployment(`${config.appNameFrontend}-${config.version}`, {
  metadata: {
    namespace: namespace.metadata.name,
    labels: {
      app: config.appNameFrontend,
      version: config.version,
    },
  },
  spec: {
    replicas: 1,
    selector: {
      matchLabels: {
        app: config.appNameFrontend,
        version: config.version
      }
    },
    template: {
      metadata: {
        labels: {
          app: config.appNameFrontend,
          version: config.version
        }
      },
      spec: {
        serviceAccountName: frontendServiceAccount.metadata.name,
        containers: [{
          name: config.appNameFrontend,
          image: "registry.digitalocean.com/connecttv/connecttv-frontend:latest",
          imagePullPolicy: "Always",
          resources: {
            limits: {
              cpu: "1000m"
            },
            requests: {
              cpu: "100m"
            }
          },
          ports: [{
            containerPort: 3000
          }]
        }],
        restartPolicy: "Always",
        imagePullSecrets: [{
          name: pulumi.interpolate`${imagePullSecret.metadata.name}`
        }]
      }
    }
  }
})

and finally the ingress:

const frontendIngress = new k8s.networking.v1beta1.Ingress(`frontend-ingress-${config.projectName}-v1`, {
  metadata: {
    namespace: namespace.metadata.name,
    annotations: {
      "kubernetes.io/ingress.class": "nginx"
    }
  },
  spec: {
    rules: [
      {
        http: {
          paths: [
            {
              path: "/",
              backend: {
                serviceName: frontendService.metadata.name,
                servicePort: frontendService.spec.ports[0].port,
              },
            }
          ],
        },
      },
    ],
  },
});

Any ideas what would be causing this?

I installed the NGINX ingress controller from the 0.32.0 manifest (DigitalOcean).

However, I modified the deploy.yaml to change externalTrafficPolicy from Local to Cluster.

Clinquant answered 30/5, 2020 at 21:53 Comment(0)

Fixed. Always check your Dockerfile! The app was configured to listen on localhost only.
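
For anyone asking for more detail: the server inside the container has to bind to 0.0.0.0, not 127.0.0.1/localhost, otherwise connections arriving on the pod IP (which is what the ingress controller's upstream uses) are refused. A minimal sketch with Node's built-in http module (not my actual app):

// Minimal sketch using Node's built-in http module (illustrative only).
import * as http from "http";

const server = http.createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok");
});

// "127.0.0.1" only accepts connections from inside the container itself;
// "0.0.0.0" also accepts connections arriving on the pod IP that the
// Service / ingress upstream targets.
server.listen(3000, "0.0.0.0", () => {
  console.log("listening on 0.0.0.0:3000");
});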

Clinquant answered 30/5, 2020 at 22:11 Comment(2)
Oh man, this gave me the clue I needed to realize that Fastify only listens on 127.0.0.1 by default. Two days wasted. – Nonsmoker
Can you elaborate more on your solution? I am facing the same problem. – Polyhydric

Maybe this piece of information will help someone. In my case, the Dockerfile was building a Fastify app that was started as follows:

await app.listen(3000);

So I had to change it to:

await app.listen(3000, '0.0.0.0');

Source: https://www.fastify.io/docs/latest/Reference/Server/#listen
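
If you are on a newer Fastify (v4 or later), listen takes an options object instead of positional arguments; something along these lines should be the equivalent:

// Fastify v4+ style: host still defaults to localhost, so set it
// explicitly when running inside a container.
await app.listen({ port: 3000, host: '0.0.0.0' });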

Raddatz answered 21/3, 2022 at 13:39 Comment(0)

I want to add my experience hosting Next.js as a Kubernetes service with nginx and Ingress on EKS. The problem had me reading the entire internet until I discovered the issue was the labels used on my Services: through my own misunderstanding I had reused the same labels across the whole deployment. Making them unique fixed it. Before the change I was getting 502 errors with (111: Connection refused), and refreshing sometimes worked. Hope this helps someone one day.
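
To illustrate what I mean (a hypothetical Pulumi sketch with made-up names, not my actual code): each Service gets its own label value in its selector, so it only picks up the pods that actually serve its targetPort.

import * as k8s from "@pulumi/kubernetes";

// Hypothetical example: distinct "app" labels per workload, so each
// Service's selector only matches pods that listen on its targetPort.
// Reusing one label everywhere can send traffic to pods that refuse
// the connection (111: Connection refused).
const webLabels = { app: "shop-web" }; // made-up names
const apiLabels = { app: "shop-api" };

const webService = new k8s.core.v1.Service("shop-web", {
  spec: {
    ports: [{ port: 80, targetPort: 3000 }],
    selector: webLabels, // matches only the Next.js pods
  },
});

const apiService = new k8s.core.v1.Service("shop-api", {
  spec: {
    ports: [{ port: 80, targetPort: 8080 }],
    selector: apiLabels, // matches only the API pods
  },
});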

Sage answered 30/1, 2024 at 10:56 Comment(0)
