Kubernetes Service not distributing the traffic evenly among pods
I am using Kubernetes v1.20.10 on a bare-metal installation with one master node and 3 worker nodes. The application simply serves HTTP requests.

I am scaling the deployment with a Horizontal Pod Autoscaler (HPA) and I noticed that the load is not distributed evenly across the pods: the first pod receives about 95% of the load and the other pods receive very little.

I tried the answer mentioned here, but it did not work: Kubernetes service does not distribute requests between pods

Hales answered 27/8, 2021 at 16:55
Which CNI is used? – Saransk
How do you generate the load? Are you using an Ingress or a LoadBalancer/NodePort Service? – Gravel
Please clarify your question: are you saying that incoming requests are not being load-balanced between pods, or that your pods are not being scheduled evenly between nodes? (The title says "not distributing the node" -- did you mean load?) – Arguello
@Saransk, we are using the Calico CNI. – Hales
@Thomas, we are using the Istio ingress. – Hales
Please attach your YAMLs to the question. How did you test the load? How exactly did you set up your cluster? Did you use Minikube? – Grissel
@MikołajGłodziak, it is a bare-metal vanilla Kubernetes installation. The application has a simple REST API. We test the load using JMeter. – Hales
Is your application PHP / Python / Java (Spring)? Did you select "HTTP keepalive" in JMeter? – Gravel
@Thomas, it's a Spring Boot Java application. – Hales

Based on the information provided, I assume that you are using HTTP keep-alive, which holds a persistent TCP connection open across requests. A Kubernetes Service distributes load per (new) TCP connection, not per request. With persistent connections, only additional connections get distributed, which is the effect you observe.
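To make the effect concrete, here is a toy model (not real kube-proxy code, and the class and method names are my own) in which a backend pod is chosen once per TCP connection and every request on that connection then hits the same pod. Real kube-proxy in iptables mode picks a backend randomly per connection rather than round-robin, but the skew is the same:

```java
import java.util.Arrays;

// Toy model of Service load balancing: a backend is picked once per
// TCP connection, then every request on that connection reuses it.
public class KeepAliveSkew {

    // Simulate `connections` TCP connections, each carrying
    // `requestsPerConnection` HTTP requests. Backends are picked
    // round-robin per connection for simplicity.
    static int[] distribute(int backends, int connections, int requestsPerConnection) {
        int[] hits = new int[backends];
        for (int c = 0; c < connections; c++) {
            int backend = c % backends;        // pinned for the whole connection
            hits[backend] += requestsPerConnection;
        }
        return hits;
    }

    public static void main(String[] args) {
        // One persistent keep-alive connection carrying 99 requests:
        // every request lands on the same pod.
        System.out.println(Arrays.toString(distribute(3, 1, 99)));  // [99, 0, 0]

        // A fresh connection per request: the load spreads evenly.
        System.out.println(Arrays.toString(distribute(3, 99, 1)));  // [33, 33, 33]
    }
}
```

With a single JMeter thread and keep-alive enabled, you are essentially in the first case, no matter how many replicas the HPA adds.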

Try: disable HTTP keep-alive, or set the maximum keep-alive time to something like 15 seconds and the maximum number of requests per connection to 50.
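Since it is a Spring Boot application, one place to apply these limits is on the server side. A sketch, assuming the embedded Tomcat container and a reasonably recent Spring Boot version (check that these properties exist in your version):

```properties
# Close an idle keep-alive connection after ~15 seconds
server.tomcat.keep-alive-timeout=15000
# Close the connection after at most 50 requests, forcing the
# client to open a new one that the Service can re-balance
server.tomcat.max-keep-alive-requests=50
```

Alternatively, untick "Use KeepAlive" on the HTTP sampler in JMeter so each request opens a new connection.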

Gravel answered 30/8, 2021 at 14:0

If the connection is long-lived, the client will keep using the same pod for the lifetime of that connection; only new connections are distributed in a round-robin manner. In that case you can either handle load balancing on the client side, or delegate it to a reverse proxy such as the Traefik ingress, which distributes individual requests rather than connections.
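For the client-side option, one common pattern is a headless Service, so that DNS returns the individual pod IPs and a connection-aware client (or an L7 proxy in front of the pods) can balance per request. A minimal sketch with hypothetical names and ports:

```yaml
# Headless Service: clusterIP: None makes the cluster DNS return the
# pod IPs directly instead of a single virtual IP, so the client or a
# request-aware proxy can pick a pod per request.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
```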

Sabella answered 23/2, 2023 at 18:8