OpenShift Route is not load balancing across Service pods

I have tried this on OpenShift Origin 3.9 and on OpenShift Online. I deployed a simple hello world PHP app on OpenShift. It has a Service and a Route.

When I call the Route, I get the expected output, Hello world, along with the pod IP. Let's call this pod's IP 1.1.1.1.

Then I deployed the same app with a small text change, with the same label, under the same Service. Let's call this pod's IP 2.2.2.2.

I can see both pods running under the single Service. But when I call the Route, it always shows pod IP 1.1.1.1; the request never hits the second pod.

My understanding is that the Route forwards to the Service, and the Service load balances between the available pods.

But that isn't happening. Any help is appreciated.

Foxworth answered 6/2, 2019 at 12:00

The default behavior of the HAProxy router is to use a cookie to ensure "sticky" routing, so that a session keeps hitting the same pod. See https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html

If you set the haproxy.router.openshift.io/disable_cookies annotation to true on the route, it should disable this behavior.
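
For example, a minimal sketch using oc annotate, assuming your route is named hello-world (a placeholder; substitute your own route name):

# hello-world is a placeholder route name; disable cookie-based stickiness
oc annotate route hello-world haproxy.router.openshift.io/disable_cookies=true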

Pentavalent answered 6/2, 2019 at 12:09

For those who came here looking for a solution: both answers, by Daein Park and Will Gordon, are true.

Here is the simple catch:

  1. If you call your pod externally, the request goes Router to Service to Pod. If the haproxy.router.openshift.io/disable_cookies annotation is not set to true on the route, the router always forwards to the same pod.

    After disabling sticky routing with the annotation above, you can also select a load-balancing algorithm with haproxy.router.openshift.io/balance as the key and one of [source, roundrobin, leastconn] as the value (see the sketch after this list).

  2. If you call your pod internally from another pod, the request goes Service to Pod. The Service does round-robin load balancing just fine with the default configuration.

So you should:

  • Add the annotation above to your route if you want your service exposed through a router.
  • Do nothing if you want your service to be accessed only internally.

(Tested on OpenShift 4.2.28)
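
A minimal sketch of both annotations together, again assuming a hypothetical route named hello-world:

# placeholder route name; disable stickiness, then pick an explicit algorithm
oc annotate route hello-world haproxy.router.openshift.io/disable_cookies=true
oc annotate route hello-world haproxy.router.openshift.io/balance=roundrobin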

Chiffon answered 22/4, 2020 at 7:41

Comment from Jinx: I have an additional question. You wrote "Router to Service to Pod". Assuming we have two pods of some application and one Service (ClusterIP), will the Router with these settings still hit the pods round robin, i.e. "Router to Service to Pod-1" and "Router to Service to Pod-2" equally?
W
3

Services do NOT load balance between pods; the selection is completely random. This has been confirmed to us by Red Hat support. What's more, the answers above only test with separate curl calls.

If you make subsequent calls within the same curl invocation, you will see that it reuses the connection. Just try:

curl http://172.30.177.72:8080/index.html http://172.30.177.72:8080/index.html

Do this instead of iterating, and you will see that keep-alive reuses the connection and you end up on the same pod every time.
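
As a counter-check, a small sketch: forcing curl to close the connection after each request disables keep-alive, so every call opens a fresh connection and the Service can pick a different pod. The IP below is the example Service IP from above.

# 'Connection: close' disables keep-alive, so each request opens a new connection
while :; do curl -H 'Connection: close' http://172.30.177.72:8080/index.html; sleep 1; done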

Whipstock answered 3/12, 2020 at 10:26

"My understanding is that the Route forwards to the Service, and the Service load balances between the available pods."

In general, your understanding is right. Let's test it in your environment as follows.

# oc describe svc web
Name:              web
Namespace:         test
Labels:            app=web
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=web,deploymentconfig=web
Type:              ClusterIP
IP:                172.30.6.8
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         1.1.1.1:8080,2.2.2.2:8080
Session Affinity:  None
Events:            <none>

Session Affinity is None by default, which means requests are distributed in a round-robin manner.

You can check that requests are spread in a round-robin manner by looping curl while monitoring the pods with oc logs, or by comparing the index.html response bodies (if the contents differ).

while :; do curl http://172.30.177.72:8080/index.html; sleep 1;  done
1.1.1.1:8080
2.2.2.2:8080
1.1.1.1:8080 
2.2.2.2:8080
...
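
Note that a ClusterIP is only reachable from inside the cluster, so a sketch of running the loop from another pod (web-1-abcde is a placeholder pod name; this assumes the pod image includes curl):

# open a shell in a pod on the cluster, then run the curl loop from there
oc rsh web-1-abcde
while :; do curl http://172.30.177.72:8080/index.html; sleep 1; done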
Cervantez answered 6/2, 2019 at 14:08
