Kubernetes nginx ingress controller returns 504 error
Our on-premises Kubernetes/Kubespray cluster has suddenly stopped routing traffic between the nginx ingress controller and the NodePort services behind it. All external requests to the ingress endpoint return a "504 - gateway timeout" error.

How do I diagnose what has broken?

I've confirmed that the containers/pods are running and that the Node application has started; if I exec into the pod, I can run a local curl command and get a response from the app.
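A minimal sketch of that check (namespace, pod name, and port are placeholders for the actual values, and it assumes curl is available in the container image):

    # Confirm the pods are Running and Ready
    kubectl get pods -n my-namespace -o wide

    # Exec into the pod and hit the app locally on its container port
    kubectl exec -n my-namespace -it my-app-pod -- curl -v http://localhost:8080/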

I've checked the logs on the ingress pods: traffic is arriving, and nginx is trying to forward it on to the service endpoint/node port, but it is reporting an error.
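Assuming a standard ingress-nginx install in the ingress-nginx namespace (adjust the namespace and label to your deployment), the forwarding errors can be pulled out of the controller logs like this; on a 504, nginx typically logs an "upstream timed out" line:

    # Tail the controller logs and filter for upstream errors
    kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx \
      --tail=200 | grep -i upstream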

I've also tried to curl the node directly via the node port, but I get no response.
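For reference, that check looks roughly like this (service name, namespace, node IP, and port are placeholders):

    # Look up the service's NodePort
    kubectl get svc my-app-svc -n my-namespace \
      -o jsonpath='{.spec.ports[0].nodePort}'

    # Curl the node directly; a hang ending in a timeout matches the 504 symptom
    curl -v --max-time 10 http://NODE_IP:NODE_PORT/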

I've looked at the ipvs configuration and the settings look valid (e.g. there are rules for the node to forward traffic arriving on the node port to the service endpoint address/port).
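Two quick checks along those lines (service name and namespace are placeholders): dump the IPVS rules kube-proxy programmed on the node, and confirm the service actually has ready endpoints behind it:

    # On the node: list IPVS virtual servers and their real servers
    sudo ipvsadm -Ln

    # An empty endpoints list means the selector matches no ready pods
    kubectl get endpoints my-app-svc -n my-namespace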

Decapolis answered 19/9, 2019 at 18:40 Comment(6)
Here lies the answer: "I've also tried to curl the node directly via the node port, but I get no response." Check your routing tables and pod config. – Avram
Have you checked whether it happens after a specific amount of time? For example, your function might take more than 60 seconds to complete. You can check the ingress documentation: kubernetes.github.io/ingress-nginx/user-guide/… or scalescale.com/tips/nginx/504-gateway-time-out-using-nginx/# (see the timeout sketch after these comments). – Vachell
@Avram - I checked the routing tables via ipvsadm and everything looks fine. – Decapolis
@abielak - I don't think the problem is with the ingress controller. The logs show traffic being received by the ingress controller - it just can't forward the traffic on to the node. – Decapolis
Could you provide the yaml files (service, ingress, deployment)? – Vachell
I had this same issue. My environment has a proxy in it, so I had to add a NO_PROXY environment variable containing the domain being routed, because the proxy was intercepting the routing (see the sketch after these comments). – Equiprobable
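A hedged sketch of both suggestions from the comments above. The annotation keys are ingress-nginx's documented proxy timeout settings; the ingress name, namespaces, controller deployment name, and NO_PROXY values are placeholders to adapt:

    # Raise the proxy timeouts on the Ingress (values in seconds)
    kubectl annotate ingress my-ingress -n my-namespace \
      nginx.ingress.kubernetes.io/proxy-connect-timeout="60" \
      nginx.ingress.kubernetes.io/proxy-send-timeout="120" \
      nginx.ingress.kubernetes.io/proxy-read-timeout="120" \
      --overwrite

    # If a corporate proxy sits in the environment, exclude the routed
    # domain and cluster-internal traffic from it
    kubectl set env deployment/nginx-ingress-controller -n ingress-nginx \
      NO_PROXY="10.0.0.0/8,.cluster.local,my-app.example.internal"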

We couldn't resolve this issue and, in the end, the only workaround was to uninstall and reinstall the cluster.

Decapolis answered 4/12, 2019 at 18:57 Comment(0)

I was getting this because the nginx ingress controller pod was running out of memory; I just increased the memory for the pod and it worked.
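A sketch of that fix, assuming the controller runs as a Deployment named nginx-ingress-controller in the ingress-nginx namespace (names and sizes are placeholders; adjust to your install):

    # Check whether the controller pod was OOM-killed
    kubectl describe pod -n ingress-nginx \
      -l app.kubernetes.io/name=ingress-nginx | grep -A3 "Last State"

    # Raise the controller's memory request/limit
    kubectl set resources deployment nginx-ingress-controller -n ingress-nginx \
      --requests=memory=256Mi --limits=memory=512Mi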

Unprovided answered 27/10, 2020 at 22:16 Comment(1)
This is the correct answer. Thanks for this, @Unprovided. Anyone hitting this issue should use this answer to fix it. – Intracardiac

I was facing a similar issue, and the simple fix was to increase the K8S_CPU_LIMIT and K8S_MEMORY_LIMIT values for the application pods running on the cluster.
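K8S_CPU_LIMIT and K8S_MEMORY_LIMIT look like variables from this team's own deployment templating rather than standard Kubernetes settings; the generic equivalent (deployment name, namespace, and sizes are placeholders) is:

    # Raise CPU/memory requests and limits on the application pods
    kubectl set resources deployment my-app -n my-namespace \
      --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi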

Thenceforward answered 24/10, 2021 at 23:30 Comment(2)
That could be one of the possible reasons, but it is not an answer to the question. – Tartlet
If you have a new question, please ask it by clicking the Ask Question button. Include a link to this question if it helps provide context. - From Review – Ermelindaermengarde
