I was using Helm and didn't want to go through reinstalling it, so I worked around this issue by editing the configmap that the Helm chart generates to hold the Redis configuration.
CONFIGMAP=<<value of common.names.fullname>>-configuration
kubectl edit cm $CONFIGMAP
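If you are not sure of the exact configmap name, listing the configmaps in the release namespace should surface it (the "-configuration" suffix comes from the Bitnami chart's naming convention; treat the pattern below as an assumption that may vary by chart version and release name):

```shell
# Filter configmaps for the Redis release; pick the one ending in "-configuration".
kubectl get configmaps | grep redis
```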
You should see something like:
master.conf: |-
    dir /data
    # User-supplied master configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of master configuration
redis.conf: |-
    # User-supplied common configuration:
    # Enable AOF https://redis.io/topics/persistence#append-only-file
    appendonly yes
    # Disable RDB persistence, AOF persistence already enabled.
    save ""
    # End of common configuration
replica.conf: |-
    dir /data
    slave-read-only yes
    # User-supplied replica configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of replica configuration
Remove the lines beginning with rename-command, so it looks like this:
master.conf: |-
    dir /data
    # User-supplied master configuration:
    # End of master configuration
redis.conf: |-
    # User-supplied common configuration:
    # Enable AOF https://redis.io/topics/persistence#append-only-file
    appendonly yes
    # Disable RDB persistence, AOF persistence already enabled.
    save ""
    # End of common configuration
replica.conf: |-
    dir /data
    slave-read-only yes
    # User-supplied replica configuration:
    # End of replica configuration
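If you prefer a non-interactive edit, the same change can be scripted by dumping the configmap, stripping the rename-command lines, and applying the result back (a sketch; $CONFIGMAP is the name from above, and the sed pattern assumes the only lines containing rename-command are the ones you want removed):

```shell
# Dump the configmap, drop every rename-command line, and apply it back.
kubectl get cm "$CONFIGMAP" -o yaml \
  | sed '/rename-command/d' \
  | kubectl apply -f -
```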
Restart the Redis pods:
kubectl delete pods $(kubectl get pods | grep redis | awk '{print $1}')
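The grep/awk pipeline just takes the NAME column of every pod row containing "redis"; as a variant, the filtering and extraction can be collapsed into awk alone (a sketch, assuming pod names contain "redis" as in the chart's defaults):

```shell
# Print the first column of matching rows, then delete those pods.
kubectl get pods --no-headers | awk '/redis/ {print $1}' | xargs kubectl delete pods
```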
Now exec into the master pod and run FLUSHALL:
kubectl exec redis-master-0 -- redis-cli FLUSHALL
OK
Be aware that you will have to do this again after any reinstall of your Helm release if you want to keep using FLUSHALL or FLUSHDB.
Update: Although this works, when you reinstall your Helm release the pods will go into CrashLoopBackOff, because the persisted command history (the append-only file) contains the commands you ran, and those commands no longer exist once the rename-command settings are restored. You will have to go through this again to get the pods running, so it is probably best to go with @camilo-sampedro's answer in this case.