Nginx proxy on Kubernetes
I have an nginx deployment in a k8s cluster which proxies my /api calls like this:

server {
  listen 80;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
  }

  location /api {
    proxy_pass http://backend-dev/api;
  }
}

This works most of the time; however, sometimes when the API pods aren't ready, nginx fails with this error:

nginx: [emerg] host not found in upstream "backend-dev" in /etc/nginx/conf.d/default.conf:12

After a couple of hours exploring the internet, I found an article describing pretty much the same issue. I've tried this:

  location /api {
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
  }

Now nginx returns 502. And this:

  location /api {
    resolver 10.0.0.10 valid=10s;
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
  }

Nginx returns 503.

What's the correct way to fix it on k8s?

Bah answered 17/7, 2019 at 11:28 Comment(0)

If your API pods are not ready, Nginx won't be able to route traffic to them.

From Kubernetes documentation:

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

If you are not using liveness or readiness probes, then your pod will be marked as "ready" even if the application running inside the container has not finished its startup process and is not yet ready to accept traffic.
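As an illustration, a readiness probe is declared on the pod template of the backend Deployment. The manifest below is a minimal sketch, assuming a Deployment named backend-dev listening on port 80; the /healthz path and the image name are placeholders, not taken from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-dev
spec:
  selector:
    matchLabels:
      app: backend-dev
  template:
    metadata:
      labels:
        app: backend-dev
    spec:
      containers:
        - name: api
          image: example.registry/backend-dev:latest  # placeholder image
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /healthz   # assumed health endpoint; use your API's real one
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

Until the probe succeeds, the pod is excluded from the Service's endpoints, so no traffic (and, by default, no DNS record for headless services) is published for it.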

The relevant section regarding Pods and DNS records can be found in the Kubernetes DNS documentation:

Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster-domain.example), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
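The last sentence of that quote points at one possible workaround: publishing DNS records for not-yet-ready pods. A minimal sketch of a Service doing this, assuming the same backend-dev naming as above (note that the name will then resolve during startup, but requests may still fail until the pods are actually ready):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-dev
spec:
  selector:
    app: backend-dev
  ports:
    - port: 80
      targetPort: 80
  # Publish endpoints (and DNS records) even for pods that are not ready yet,
  # so "backend-dev" resolves while the backend is still starting up.
  publishNotReadyAddresses: true
```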

UPDATE: I would suggest using NGINX as an ingress controller.

When you use NGINX as an ingress controller, the NGINX service starts successfully and whenever an ingress rule is deployed, the NGINX configuration is reloaded on the fly.

This will help you avoid NGINX pod restarts.
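A minimal Ingress resource equivalent to the proxy_pass rule in the question might look like the sketch below, assuming an NGINX ingress controller is installed in the cluster and the backend Service is named backend-dev on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx   # assumes the NGINX ingress controller class
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-dev
                port:
                  number: 80
```

With this approach, the controller watches the Service's endpoints and reloads its configuration as pods come and go, so there is no hard-coded upstream hostname to fail resolution at startup.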

Nemato answered 17/7, 2019 at 20:54 Comment(8)
That makes sense, but is it really the reason why nginx fails to resolve DNS name?Bah
Yes, Nginx tries to forward the traffic received from the /api path to the service named backend-dev using kubernetes internal DNS service. If there are no Pods backing that service then the DNS service won't be able to resolve anything. The Kubernetes Service documentation has very detailed information about this.Nemato
I can't really find that part. Could you please paste the exact quote?Bah
So, I successfully set up a readinessProbe for my backend-dev service. Despite that, the NGINX container, which routes traffic to backend-dev, keeps crashing several times with the same nginx: [emerg] host not found in upstream "backend-dev" in /etc/nginx/conf.d/default.conf:12 while it's getting ready. How do I fix this?Bah
AFAIK this is the expected behavior since your service is not ready at that time and there is nothing to handle the incoming traffic. Are you using NGINX as an ingress controller or are you deploying NGINX as a service? What is your expected behavior?Nemato
I'd prefer that the pods don't get restarted. But if it's able to resolve backend-dev after all, that really doesn't matter. For now I'm deploying NGINX as a load balancer service (on the way to switching to ingress). Probably this approach fixes the 503 error I previously got. I will keep you posted.Bah
If you don't go the ingress path, this post may help you.Nemato
Let us continue this discussion in chat.Bah
