Redis Sentinel failover configuration in Docker Swarm

Description


I'm trying to create a Redis cluster in a Docker swarm, using the Bitnami Redis Docker image for my containers. The Bitnami documentation always suggests running a single master node, whereas the Redis documentation states that there should be at least three master nodes, so I'm confused about which one is right.

Given that all Bitnami slaves are read-only by default, if I set up only a single master on one of the swarm leader nodes and it fails, I believe Sentinel will try to promote a different slave Redis instance to master; but since that instance is read-only, all write operations against it will fail. If I instead make the master Redis service global, meaning it will be created on every node available in the swarm, do I require Sentinel at all in that case? Also, if the setup below is a good one, is there a reason to introduce a load balancer?
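
For reference, one way to observe a node's replication role and read-only flag during a failover test (a minimal sketch; <replica-host> is a placeholder, and the password comes from the compose file below):

# Ask a replica whether it is read-only (pre-5.0 setting name; newer
# Redis versions call it replica-read-only).
redis-cli -h <replica-host> -p 6379 -a laSQL2019 CONFIG GET slave-read-only

# A node promoted by Sentinel reports "master" here and accepts writes.
redis-cli -h <replica-host> -p 6379 -a laSQL2019 ROLE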

Setup


+------------------+ +------------------+ +------------------+ +------------------+
| Node-1           | | Node-2           | | Node-3           | | Node-4           |     
| Leader           | | Worker           | | Leader           | | Worker           |
+------------------+ +------------------+ +------------------+ +------------------+
|  M1              | | M2               | | M3               | | M4               |
|  R1              | | R2               | | R3               | | R4               |
|  S1              | | S2               | | S3               | | S4               |
|                  | |                  | |                  | |                  |
+------------------+ +------------------+ +------------------+ +------------------+

Legend:

  • Masters are called M1, M2, M3, ..., Mn
  • Slaves are called R1, R2, R3, ..., Rn (R stands for replica).
  • Sentinels are called S1, S2, S3, ..., Sn

Docker


version: '3'

services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    volumes:
      - 'redis-master-volume:/bitnami'
    deploy:
      mode: global

  redis-slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    volumes:
      - 'redis-slave-volume:/bitnami'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=laSQL2019
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    deploy:
      mode: replicated
      replicas: 4
      
  redis-sentinel:
    image: 'bitnami/redis:latest'
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-slave
    volumes:
      - 'redis-sentinel-volume:/bitnami'
    # Write a minimal Sentinel config at startup, then exec Sentinel in the
    # foreground so it becomes the container's main process.
    entrypoint: |
      bash -c 'cat > /opt/bitnami/redis/etc/sentinel.conf <<CONF
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node laSQL2019
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      CONF
      exec redis-sentinel /opt/bitnami/redis/etc/sentinel.conf'
    deploy:
      mode: global
                     
volumes:
  redis-master-volume:
    driver: local
  redis-slave-volume:
    driver: local
  redis-sentinel-volume:
    driver: local
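
For completeness, a quick way to deploy and sanity-check this stack (a sketch; the stack name redis and <sentinel-container> are assumptions about your deployment):

# Deploy the compose file as a swarm stack ("redis" is an arbitrary name).
docker stack deploy -c docker-compose.yml redis

# From inside any Sentinel container, ask for the monitored master's
# address and verify that a quorum of Sentinels is reachable.
docker exec -it <sentinel-container> redis-cli -p 16379 SENTINEL get-master-addr-by-name master-node
docker exec -it <sentinel-container> redis-cli -p 16379 SENTINEL ckquorum master-node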
Mohave answered 9/5, 2018 at 15:27

The Bitnami solution is a failover solution, hence it has one master node.

Sentinel is an HA solution, i.e. automatic failover: when the master fails, one of the replicas is promoted in its place. It does not provide scalability in terms of distributing data across multiple nodes. You would need to set up Redis Cluster if you want 'sharding' in addition to 'HA'.
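
To make the distinction concrete: with Sentinel, clients first ask a Sentinel for the current master, and all data lives on that single master; with Redis Cluster, the keyspace is sharded across several masters. A sketch of both, with placeholder host names (redis-cli --cluster requires Redis 5+; older releases shipped redis-trib.rb instead):

# Sentinel (HA only): discover the current master, then talk to it.
redis-cli -p 16379 SENTINEL get-master-addr-by-name master-node

# Redis Cluster (HA + sharding): three masters, one replica each;
# keys are distributed across the masters by hash slot.
redis-cli --cluster create \
  node1:6379 node2:6379 node3:6379 \
  node4:6379 node5:6379 node6:6379 \
  --cluster-replicas 1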

Gamp answered 29/5, 2018 at 8:26
