Redis master/slave replication on Kubernetes for ultra-low latency
A diagram is always better than a long explanation, so here is what I would like to do:

[architecture diagram]

To sum up:

  • I want to have a Redis master instance outside (or inside, this is not relevant here) my K8S cluster
  • I want to have a Redis slave instance per node replicating the master instance
  • I want that when removing a node, the Redis slave pod gets unregistered from master
  • I want that when adding a node, a Redis slave pod is added to the node and registered to the master
  • I want all pods in one node to consume only the data of the local Redis slave (easy part I think)

Why do I want such an architecture?

  • I want to take advantage of Redis master/slave replication to avoid dealing with cache invalidation myself
  • I want to have ultra-low latency calls to Redis cache, so having one slave per node is the best I can get (calling on local host network)
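
The per-node slave part could be sketched as a DaemonSet (a sketch, not a tested deployment; the image tag and the master address `redis-master.example.com` are assumptions to replace with your own):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redis-replica
spec:
  selector:
    matchLabels:
      app: redis-replica
  template:
    metadata:
      labels:
        app: redis-replica
    spec:
      containers:
      - name: redis
        image: redis:6.2
        # Start each per-node instance as a read-only replica of the master
        args: ["redis-server", "--replicaof", "redis-master.example.com", "6379"]
        ports:
        - containerPort: 6379
          hostPort: 6379    # reachable from same-node pods via the node IP
          name: redis
```

With this, registration and unregistration follow the node lifecycle for free: the DaemonSet controller starts a replica pod on every new node and removes it when the node goes away.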

Is it possible to automate such deployments, using Helm for instance? Are there documentation resources for building such an architecture with clean, dynamic master/slave binding and unbinding?

And most of all, is this architecture a good idea for what I want to do? Is there any alternative that could be as fast?

Thermocouple answered 22/12, 2021 at 16:40 Comment(5)
What about using an additional in-memory cache layer? I know you mentioned you don't want to deal with cache invalidation etc., but depending on the use case it could make sense. Most probably you have considered this already, but I wanted to mention it because scaling Redis slaves together with app pods seemed too costly, and it still means network overhead (even if over localhost). – Orton
I have just come across Redis client-side caching, which was introduced in Redis 6; I thought it could be of interest to you. – Orton
That sounds interesting, but it would cost us N × (local cache memory usage), where N is the number of pods running. – Gerardgerardo
@HarshManvar for now I have not had time to focus on this architecture (but I'm definitely still interested in it). What is planned for now is to use a more classic Redis architecture, but with Ristretto as a local in-memory cache for our app. – Thermocouple
Actually what @Orton said is not that bad for our usage, because I'm talking about a small amount of data with high-speed access, so the cost would not be that high under these conditions. – Thermocouple
I remember we had a discussion on this topic previously; no worries, I'll add more here.

Read more about the Redis Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/redis#choose-between-redis-helm-chart-and-redis-cluster-helm-chart
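
For example, the Bitnami chart supports a master/replica topology out of the box via its values (a fragment, assuming a recent chart version; note that it deploys replicas as a StatefulSet, not one per node, so a strictly per-node layout would still need a custom chart):

```yaml
# values.yaml fragment for bitnami/redis
architecture: replication   # one master plus replicas
replica:
  replicaCount: 3
```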

You should also be asking how your application will connect to the Redis pod on the same node without going through a Redis Service.

For that, you can use environment variables populated via the Downward API and expose them to the application pod.

Something like :

env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP

It gives you the IP of the node the pod is running on; you can then use that IP to connect to the local DaemonSet pod (the Redis slave, if that is what you are running).

You can read more at: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
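
Putting the two together, the application container could read the node IP into `REDIS_HOST` (a sketch; it assumes the per-node Redis slave exposes a hostPort on 6379):

```yaml
containers:
- name: web-app
  image: web-app            # placeholder image name
  env:
  - name: REDIS_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # node IP, i.e. the local Redis slave
  - name: REDIS_PORT
    value: "6379"
```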

Is it possible to automate such deployments, using Helm for instance?

Yes, you can write your own Helm chart and deploy the generated YAML manifests.

And most of all, is this architecture a good idea for what I want to do? Is there any alternative that could be as fast?

As an idea it can work, but in my view it could create cost issues and higher cluster resource usage.

What if you are running 200 nodes, each with its own Redis slave? That would consume resources on every node and add cost to your infrastructure.

Your suggestion above is also good, but if you are planning to use Redis with only one specific deployment, you can use the sidecar pattern instead and connect the Redis instances together through configuration:

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    name: http          # serves the web app, not Redis
    targetPort: 5000
  selector:
    app: web
  type: LoadBalancer    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: redis
        image: redis
        ports:
          - containerPort: 6379
            name: redis
            protocol: TCP        
      - name: web-app
        image: web-app
        env:
          - name: "REDIS_HOST"
            value: "localhost"   # the sidecar Redis container in the same pod
Gerardgerardo answered 22/12, 2021 at 19:41 Comment(0)
