Sticky sessions on Kubernetes cluster

Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two load balancers: one for the backend (in Spring Boot) and another for the frontend (in Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/*
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /*
            backend:
              serviceName: sample-frontend
              servicePort: 80

The ingress mentioned above can make the frontend app communicate with the REST API made available by the backend app. However, I have to create sticky sessions, so that every user communicates with the same pod, because of the authentication mechanism provided by the backend. To clarify, if one user authenticates on pod #1, the cookie will not be recognized by pod #2.

To overcome this issue, I read that the NGINX Ingress Controller manages to deal with this situation, so I installed it using Helm, following the steps available here: https://kubernetes.github.io/ingress-nginx/deploy/
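
For reference, a minimal Helm install following that guide might look like the sketch below; the release name and namespace are assumptions, and the exact chart coordinates have changed across controller versions, so check the linked guide for your version:

# Add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace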

You can find below the diagram for the architecture I'm trying to build:

[Architecture diagram]

With the following services (I will just paste one of the services, the other one is similar):

apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: LoadBalancer

And I declared the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/*
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /*
            backend:
              serviceName: sample-frontend
              servicePort: 80

After that, I ran kubectl apply -f sample-nginx-ingress.yaml to apply the ingress; it was created and its status is OK. However, when I access the URL that appears in the "Endpoints" column, the browser can't connect. Am I doing anything wrong?
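
To check which address the ingress actually exposes, commands like these can help; the ingress-nginx namespace is an assumption that depends on how the Helm chart was installed:

# External IP of the nginx ingress controller's own service
kubectl get svc -n ingress-nginx
# Address assigned to the ingress resource itself
kubectl get ingress sample-nginx-ingress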

Edit 1

Updated service and ingress configurations

After some help, I've managed to access the services through the NGINX Ingress. Below you have the configurations:

Nginx Ingress

The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"

spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /
            backend:
              serviceName: sample-frontend
              servicePort: 80
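
A quick way to check that the affinity cookie is actually issued is to inspect the response headers, e.g. with a sketch like this (the ingress IP is a placeholder):

# A working setup should answer with a "Set-Cookie: sample-cookie=..." header
curl -i http://<INGRESS_IP>/rest/v1/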

Services

Also, the services shouldn't be of type "LoadBalancer" but "ClusterIP", as below:

apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: ClusterIP

However, I still can't achieve sticky sessions in my Kubernetes cluster, since I'm still getting 403 responses and the cookie name is not even being set, so I guess the annotations are not working as expected.
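
One way to verify whether the annotations are being picked up is to inspect the controller's logs and rendered configuration; a sketch, assuming the controller runs in the ingress-nginx namespace (the pod name is a placeholder):

kubectl get pods -n ingress-nginx
kubectl logs -n ingress-nginx <controller-pod-name>
# The rendered config should contain the session affinity directives
kubectl exec -n ingress-nginx <controller-pod-name> -- cat /etc/nginx/nginx.conf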

Lavelle answered 10/12, 2019 at 17:21 Comment(13)
What type is your Service? Is it LoadBalancer or NodePort?Hufford
They are Load Balancers.Lavelle
So you are accessing your services via the IP address from "Endpoints", exposed as a GCP load balancer? This means that you are not using your Ingress; you must have a Service of type NodePort on GKE for this.Hufford
Or what do you mean with the "Endpoint" column? You should access your services via your Ingress Controller.Hufford
Can you please post your Service yaml?Schoenburg
Jonas, the ingress routes two paths: /rest/v1/* and /* . I'm trying to access the ingress IP, not the load balancer IP. In the "Services & Ingress" section on GCP you can see the column "Endpoints". I'm using the ingress endpoint. Am I doing it wrong?Lavelle
Dávid, I've just updated the question to include one of the Services (Load Balancer).Lavelle
I thought Service should be of type NodePort when using Ingress on GCP, but I am not sure.Hufford
Which version of Kubernetes are you using?Ginnygino
@DawidKruk I'm using these versions: Client Version: {Major:"1", Minor:"16"} Server Version: {Major:"1", Minor:"13+"}Lavelle
Please provide output from this command (it will show nginx controller version): kubectl exec -it $(kubectl get pods -l app=nginx-ingress,component=controller -o jsonpath='{.items[0].metadata.name}') -- /nginx-ingress-controller --versionGinnygino
@DawidKruk Here you have: ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.26.1 Build: git-2de5a893a Repository: github.com/kubernetes/ingress-nginx nginx version: openresty/1.15.8.2 -------------------------------------------------------------------------------Lavelle
@DawidKruk Any thoughts on how to achieve stickiness for inter-service communication? I.e., when a ServiceA pod calls a ServiceB pod, how to ensure the same pod is picked? Since these calls may not get routed through the nginx ingress, I was wondering about it.Dispeople

I looked into this matter and found a solution to your issue.

To achieve sticky sessions for both paths, you will need two Ingress definitions.

I created an example configuration to show you the whole process:

Steps to reproduce:

  • Apply Ingress definitions
  • Create deployments
  • Create services
  • Create Ingresses
  • Test

I assume that the cluster is provisioned and working correctly.

Apply Ingress definitions

Follow the official deployment guide (https://kubernetes.github.io/ingress-nginx/deploy/) to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure.

Run the command below to create all the mandatory prerequisites:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Run the command below to apply the generic cloud configuration and create a service for the controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
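
To confirm that the controller came up, something like this should show a running controller pod and a service with an external IP (assuming the manifests above created the ingress-nginx namespace):

$ kubectl get pods -n ingress-nginx
$ kubectl get svc -n ingress-nginx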

Create deployments

Below are 2 example deployments to respond to the Ingress traffic on specific services:

hello.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"

Apply this first deployment configuration by invoking the command:

$ kubectl apply -f hello.yaml

goodbye.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye
spec:
  selector:
    matchLabels:
      app: goodbye
      version: 2.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: goodbye
        version: 2.0.0
    spec:
      containers:
      - name: goodbye 
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"

Apply this second deployment configuration by invoking the command:

$ kubectl apply -f goodbye.yaml

Check if the deployments created the pods correctly:

$ kubectl get deployments

It should show something like this:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
goodbye   5/5     5            5           2m19s
hello     5/5     5            5           4m57s

Create services

To connect to the pods created earlier, you will need to create services. Each service will be assigned to one deployment. Below are the 2 services to accomplish that:

hello-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001

Apply the first service configuration by invoking the command:

$ kubectl apply -f hello-service.yaml

goodbye-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: goodbye-service
spec:
  type: NodePort
  selector:
    app: goodbye
    version: 2.0.0
  ports:
  - name: goodbye-port
    protocol: TCP
    port: 50001
    targetPort: 50001

Apply the second service configuration by invoking the command:

$ kubectl apply -f goodbye-service.yaml

Keep in mind that both configurations use type: NodePort.

Check if services were created successfully:

$ kubectl get services

The output should look like this:

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
goodbye-service   NodePort    10.0.5.131   <none>        50001:32210/TCP   3s
hello-service     NodePort    10.0.8.13    <none>        50001:32118/TCP   8s

Create Ingresses

To achieve sticky sessions, you will need to create 2 Ingress definitions.

Definitions are provided below:

hello-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port

goodbye-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goodbye-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port

Please change DOMAIN.NAME in both Ingresses to a domain appropriate to your case. I would advise looking at the Ingress sticky-session documentation: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/. Both Ingresses are configured for HTTP-only traffic.
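
If you do not have a DNS record yet, one way to test is to resolve the domain to the ingress IP locally; a sketch (replace the example address below with the one shown by kubectl get ingress):

# Map the test domain to the ingress controller's IP for local resolution
$ echo "203.0.113.10 DOMAIN.NAME" | sudo tee -a /etc/hosts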

Apply both of them invoking command:

$ kubectl apply -f hello-ingress.yaml

$ kubectl apply -f goodbye-ingress.yaml

Check if both configurations were applied:

$ kubectl get ingress

Output should be something like this:

NAME              HOSTS        ADDRESS          PORTS   AGE
goodbye-ingress   DOMAIN.NAME   IP_ADDRESS      80      26m
hello-ingress     DOMAIN.NAME   IP_ADDRESS      80      26m

Test

Open your browser and go to http://DOMAIN.NAME. The output should be like this:

Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb

Hostname: hello-549db57dfd-4h8fb is the name of the pod. Refresh it a couple of times.

It should stay the same.

To check if the other route is working, go to http://DOMAIN.NAME/v2/. The output should be like this:

Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg

Hostname: goodbye-7b5798f754-pbkbg is the name of the pod. Refresh it a couple of times.

It should stay the same.

To ensure that the cookies are not changing, open the developer tools (usually F12) and navigate to the cookies section. You can reload the page to check that they do not change.

[Cookies screenshot]
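
The same check can also be scripted with curl; a sketch, assuming the hello-ingress configuration above:

# First request: store the affinity cookie issued by the controller
$ curl -s -c cookies.txt http://DOMAIN.NAME/
# Replay the cookie; the reported Hostname should stay the same across calls
$ curl -s -b cookies.txt http://DOMAIN.NAME/
$ curl -s -b cookies.txt http://DOMAIN.NAME/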

Ginnygino answered 16/12, 2019 at 16:14 Comment(11)
Thank you for your answer. I tried your solution in my cluster, however, I didn't understand the purpose of DOMAIN.NAME here. I suppose that the DOMAIN.NAME is mandatory for Nginx Ingresses. So, I put a default URL like 'stickyingress.example.com', the same as the example. However, I can't connect to stickyingress.example.com in the browser, but I'm able to connect to the ingress URL and it is redirected to the frontend app as expected, but I'm getting 404 in the backend app (equivalent to your goodbye service). I think I've misunderstood the meaning of the host parameter here.Lavelle
The 'DOMAIN.NAME' is for the Ingress to know where to route the traffic. Let me elaborate. It should be in the form acme.com, not in a form like acme.com/something; the /something belongs in path. So both ingresses should have the same DOMAIN.NAME in that case. Entering the IP address in the browser will not send the appropriate host to the Ingress, and it won't work. For example, I tried to connect to the Ingress by IP address and it said error 404, but connecting by DOMAIN.NAME works properly.Ginnygino
Ok, got it. Since my solution is deployed on Google Cloud, should I configure the Google Cloud DNS and add an entry in DNS to map between the CNAME "DOMAIN.NAME" and the Nginx ingress controller IP? I did this but the browser still can't resolve the name.Lavelle
What I did was I created type A record with domain name and ip address of my ingress resource kubectl get ing. This command should show your IP.Ginnygino
@Lavelle Let me know if you managed to make it work.Ginnygino
I did what you said, using the Cloud DNS, I created a type A record and a new zone but the browser still doesn't recognize the name. I have to buy a domain from google domains and associate the domain to the ingress IP, right?Lavelle
Thanks @davidkruk, it worked, I have to fix some problems with my backend app but I'm now able to deal with the nginx ingress annotations to create sticky sessions.Lavelle
@Lavelle Yes. You will need to get the domain.Ginnygino
@DavidKruk, Thanks! One last question: why do we have to create two ingresses instead of just one? In fact, with two ingresses I can achieve sticky sessions but with only one that routes to two paths, the sticky sessions are not achieved.Lavelle
@Lavelle couldn't manage to create a sticky session with 1 ingress with configuration of: / and /v2 path. That's why I opted for 2 ingresses.Ginnygino
Does it only work when service is of type Nodeport or works with clusterip too?Sadomasochism

I think your Service configuration is wrong. Just remove type: LoadBalancer and the type will default to ClusterIP.

LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created. See more here: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer.

apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
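
To confirm the change took effect, re-apply the manifest and check the service type; a small sketch (the file name here is an assumption):

kubectl apply -f sample-backend-service.yaml
# TYPE should now show ClusterIP, with no external IP
kubectl get service sample-backend
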
Scotism answered 11/12, 2019 at 12:49 Comment(5)
Unfortunately, it didn't solve my problem. I am getting the same problem: cannot connect to the ingress URL. Actually, removing the ingress annotations is enough to successfully connect to the ingress URL, but my problem lies in the session affinity. I've also tried to set sessionAffinity: "ClientIP" in my services, but that didn't solve the problem of routing requests to the same pod.Lavelle
Hmm, hard to say what's wrong then. nginx.ingress.kubernetes.io/session-cookie-hash where did you take this? kubernetes.github.io/ingress-nginx/examples/affinity/cookie doesn't mention it.Schoenburg
You are right @Dávid, but I've seen in some blog posts and in StackOverflow answers this annotation, but I'll remove it since it is not mentioned in the official documentation. Thanks for your help!Lavelle
Have you checked out this troubleshooting site: kubernetes.github.io/ingress-nginx/troubleshooting? Check out the logs: kubectl get pods -n <namespace-of-ingress-controller> (to get the name of the pod) and then kubectl logs -n <namespace> nginx-ingress-controller-67956bf89d-fv58j. You can also try to increase the log level: kubectl edit deploy -n <namespace-of-ingress-controller> nginx-ingress-controller and # Add --v=X to "- args", where X is an integer (set X to 5 for debug mode).Schoenburg
Thank you @Dávid, actually, the debug mode was important to fix some errors in my ingress configuration. I will edit the question to provide the fixes.Lavelle
