MISCONF Redis is configured to save RDB snapshots
A

38

671

During writes to Redis ( SET foo bar ) I am getting the following error:

MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.

Basically I understand that the problem is that Redis is not able to save data to disk, but I have no idea how to get rid of the problem.

The following question describes the same problem, but it was abandoned long ago with no answers and, most likely, no attempts to solve it.

Aquilegia answered 25/10, 2013 at 4:23 Comment(6)
Were you able to resolve this issue? If yes, could you please assist with the steps? Because placing the rdb file somewhere else wouldn't solve it, I guess. I think I'm missing something here.Squarely
This error occurs due to starting the redis server in a directory where redis does not have permissions. I recommend reverting back to default settings after fixing the problem: See answer regarding a fix to this problem.Cammack
In addition to Govind Rai's answer: https://mcmap.net/q/63690/-misconf-redis-is-configured-to-save-rdb-snapshotsParing
@GovindRai I've already granted redis permission by changing both the group and the owner to redis, but it doesn't help!Metsky
just as a first quick check, make sure you have space left on diskFatling
You should start by looking at Redis' logs: tail -f /PATH/TO/REDIS/LOGS/redis-server.log. Then, trigger any action that writes to Redis to get more info. In my case I got this: Failed opening the RDB file dump.rdb (in server root dir /usr/local/bin) for saving: Permission denied. In fact, /usr/local/bin was under root:root, so running sudo chown -R redis:redis /usr/local/bin/ solved all issues. This is just one of many possible reasons for this error. Never set stop-writes-on-bgsave-error to no; it's a pretty BAD IDEA.Getraer
E
253

If you encounter this error and some important data on the running redis instance cannot be discarded (for example, the permissions on the rdb file or its directory are wrong, or the disk has run out of space), you can always redirect the rdb file to be written somewhere else.

Using redis-cli, you can do something like this:

CONFIG SET dir /tmp/some/directory/other/than/var
CONFIG SET dbfilename temp.rdb

After this, you might want to execute a BGSAVE command to make sure that the data gets written to the rdb file. When you execute INFO persistence, make sure that rdb_bgsave_in_progress is already 0 and rdb_last_bgsave_status is ok. After that, you can start backing up the generated rdb file somewhere safe.

Obviously, you should also have the dbfilename and dir changes reflected on the actual config file that you're using afterwards.
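The check described above can be sketched as a small shell snippet. It parses a captured sample of the INFO persistence output (the two field names and their healthy values are the ones from this answer; against a live server you would capture redis-cli INFO persistence instead):

```shell
# Parse the two fields that tell you whether the background save succeeded.
# "info" here is a captured sample; on a live server you would use:
#   info=$(redis-cli INFO persistence)
info='rdb_bgsave_in_progress:0
rdb_last_bgsave_status:ok'

in_progress=$(printf '%s\n' "$info" | awk -F: '/^rdb_bgsave_in_progress/{gsub(/\r/,"",$2); print $2}')
status=$(printf '%s\n' "$info" | awk -F: '/^rdb_last_bgsave_status/{gsub(/\r/,"",$2); print $2}')

if [ "$in_progress" = "0" ] && [ "$status" = "ok" ]; then
    echo "background save finished ok; safe to back up the rdb file"
fi
```

The gsub strips the carriage returns that redis-cli includes in raw INFO replies.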

Eger answered 30/10, 2013 at 3:41 Comment(8)
rdb_bgsave_in_progress:0 under PersistencePantheon
For some reason when I try any config set command, it like keeps loading forever.Individuate
For those unfortunate ones who are on Windows (me at the moment) and who are using the MSOpenTech version, you have to set the directory path in the following style: dir C:/Temp/. Do a bgsave to verify that it works.Beau
127.0.0.1:6379> CONFIG SET dir /root/tool (error) ERR Changing directory: Permission deniedRenaldo
Check if the user running the redis process can read and write on the new directory. Also check if there are already existing dump.rdb files inside that directory.Eger
I have no errors when issuing "BGSAVE" with my current config. So, how do I know why I got MISCONF in the first place?Hinduism
BGSAVE won't tell you right away if it failed or succeeded. You have to do INFO persistence repeatedly first and wait until rdb_bgsave_in_progress becomes 0. After that, check rdb_last_bgsave_status if it is ok.Eger
You should run sudo chown -R redis:redis /var/lib/redis, assuming your redis dir is /var/lib/redis, because this is a permission issue and it is safe to give the redis user ownership of the redis folder. Then you can run BGSAVE and INFO persistence to verify, which should hopefully work.Adjudge
D
725

Restart your redis server.

  • macOS (brew): brew services restart redis.
  • Linux: sudo service redis restart / sudo systemctl restart redis
  • Windows: Windows + R -> Type services.msc, Enter -> Search for Redis then click on restart.

I had this issue after upgrading redis with Brew (brew upgrade).

Once I restarted my laptop, it immediately worked.

Dalmatian answered 18/12, 2019 at 12:27 Comment(9)
If anyone is reading this I had the issue with Homebrew as well but nothing to do with the upgrade: I just needed to start the service with sudo: brew services stop redis; sudo brew services start redis.Acicular
Classic "have you tried turning it off and on again" answer that helpedHyposthenia
In my case (macOS), it works! I think the Big Sur update caused it, so some brew services need to be updated once.Determinism
I tried brew uninstall redis and brew install redis to reinstall but to no avail, but this worked!Brumaire
It also worked for me; I had the same issue after upgrading redis via brew upgrade. In my case, I upgraded my brew packages to fix this issue: #53829391Ariosto
This worked like a charm! In my case, I routinely brew-upgrade packages and "brew services restart redis" was the ticket. Cheers.Progenitive
or in my case: docker-compose restart cache did the trick (where cache is the name of your redis container)Unconformable
In addition to restarting I had to add a swap file to my server to increase the usable memory.Halette
just the brew services restart redis solved this for me immediately, thanx :3Zaidazailer
H
524

Using redis-cli, you can stop it trying to save the snapshot:

config set stop-writes-on-bgsave-error no

This is a quick workaround, but if you care about the data you are using it for, you should check to make sure why bgsave failed in first place.

Herbie answered 31/1, 2014 at 15:54 Comment(10)
this is a quick workaround but you should check to make sure why bgsave failed in first placeThermostatics
If you use redis mainly for caching and sessions, this is a must.Mitchel
Is this not dangerous? For example, NodeBB uses Redis as a data store.Corkwood
@LoveToCode config set stop-writes-on-bgsave-error yesTerpstra
Whenever i restart server i got the same issue again. Then i have to set it again. How can i make it permanent?Genevieve
@ZiaQamar - you need to save the changes in the config file to persistRealist
IMO definitely not the solution. You are just telling Redis to not log those errors. But the errors are still there...Dalmatian
find / -type f -size +100M helped me to find the large files, which eat-up all storage.Edris
@ZiaQamar on Linux systems the config files are usually under /etc/redis/; open the config file there, find the same key stop-writes-on-bgsave-error, and make the changes there.Catchweight
Kind of ironic the suggested solution to an error about persisting changes is a config change which isn't saved on the disk. Does everybody assume they will never restart the server? Does everyone have so good memories that they always remember every config change they've done when restarting it?Catchweight
S
74

There might be errors during the bgsave process due to low memory. Try this (from the Redis background save FAQ):

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1
Scott answered 13/12, 2013 at 22:37 Comment(1)
Link: redis.io/topics/faq Search for this: "Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!"Dwightdwindle
F
66

This error occurs because BGSAVE failed. During BGSAVE, Redis forks a child process to save the data to disk. While the exact reason for the BGSAVE failure can be checked from the logs (usually at /var/log/redis/redis-server.log on Linux machines), a lot of the time BGSAVE fails because the fork can't allocate memory. Often the fork fails to allocate memory (even though the machine has enough RAM available) because of a conflicting optimization by the OS.

As can be read from Redis FAQ:

Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent being a copy, but actually thanks to the copy-on-write semantic implemented by most modern operating systems the parent and child process will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the overcommit_memory setting is set to zero fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.

Setting overcommit_memory to 1 says Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.

Redis doesn't need as much memory as the OS thinks it does to write to disk, so the OS may pre-emptively fail the fork.

To resolve this, you can:

Modify /etc/sysctl.conf and add:

vm.overcommit_memory=1

Then reload the sysctl settings:

On FreeBSD:

sudo /etc/rc.d/sysctl reload

On Linux:

sudo sysctl -p /etc/sysctl.conf
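To confirm the setting took effect (or to see what it currently is), you can read it straight from /proc; this is a Linux-only check and needs no root:

```shell
# Read the kernel's current overcommit policy; 1 is the value Redis recommends.
current=$(cat /proc/sys/vm/overcommit_memory)
echo "vm.overcommit_memory is currently $current"
```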
Folks answered 15/4, 2018 at 6:42 Comment(4)
Output of systemctl status redis revealed that there is a warning that suggested exactly changing the overcommit_memory=0 setting. Altering that indeed solved the problem for me.Agamemnon
So the tldr would be, with the default settings, if redis is using 10 gb of ram, you need to have 10gb of ram free for this child process to be able to execute?Dolce
@DanHastings - Yes. And setting overcommit_memory to 1 relaxes this requirement.Folks
@Bhindi: Thanks a lot - this worked for me perfectly !!! One quick question is does this affect any other sidekiq related process or anything important?Intuition
C
50

In my case, it was just the privileges that I needed to allow for Redis to accept the incoming request.

So I restarted the Redis service via Homebrew (brew services stop redis and brew services start redis) and ran the Redis server locally with redis-server. macOS then asked me to allow the incoming connection, and it started working.

Confessional answered 16/5, 2022 at 10:23 Comment(1)
Thank you. This happened to me after an update via homebrew on a Mac.Lesterlesya
B
34

In case you are working on a Linux machine, also recheck the file and folder permissions of the database.

The db and the path to it can be obtained via:

in redis-cli:

CONFIG GET dir

CONFIG GET dbfilename

and in the command line with ls -l. The permissions for the directory should be 755, and those for the file should be 644. Also, redis-server normally executes as the user redis, so it's also good to give the redis user ownership of the folder by executing sudo chown -R redis:redis /path/to/rdb/folder. This has been elaborated in the answer here.
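As a sketch of the layout this describes, using a throwaway directory (/tmp/redis-demo is just an example, not your real data dir; on macOS stat takes -f '%Lp' instead of -c '%a'):

```shell
# Recreate the recommended modes on a scratch directory and verify them.
dir=/tmp/redis-demo
mkdir -p "$dir"
touch "$dir/dump.rdb"
chmod 755 "$dir"           # directory: rwxr-xr-x
chmod 644 "$dir/dump.rdb"  # file: rw-r--r--
stat -c '%a' "$dir"
stat -c '%a' "$dir/dump.rdb"
```

The real fix would then be the chown from the answer, run against the directory that CONFIG GET dir reports.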

Brockington answered 13/7, 2014 at 18:26 Comment(1)
What permissions should they be?Lecompte
E
29

If you're running MacOS and have recently upgraded to Catalina, you may need to run brew services restart redis as suggested in this issue.

Evante answered 11/11, 2019 at 5:30 Comment(0)
A
26

Thanks everyone for checking the problem, apparently the error was produced during bgsave.

For me, typing config set stop-writes-on-bgsave-error no in a shell and restarting Redis solved the problem.

Aquilegia answered 25/10, 2013 at 4:45 Comment(5)
That didn't "solve the problem", it just ignored it.Whimsicality
Restarting RedisServer in Services.msc worked for me.Orometer
Whenever i restart server i got the same issue again. Then i have to set it again. How can i make it permanent?Genevieve
@ZiaQamar, you can set the property permanently in redis.conf, which most likely be at /etc/redis/redis.conf, set "stop-writes-on-bgsave-error no"Ales
IMO definitely not the solution. You are just telling Redis to not log those errors. But the errors are still there...Dalmatian
C
26

Start Redis Server in a directory where Redis has write permissions

The answers above will definitely solve your problem, but here's what's actually going on:

The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the redis server is where a dump.rdb file will be created and updated.

It seems you have started running the redis server in a directory where redis does not have the correct permissions to create the dump.rdb file.

To make matters worse, redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.

To solve this problem, you must go into the active redis client environment using redis-cli and update the dir key and set its value to your project folder or any folder where non-root has permissions to save. Then run BGSAVE to invoke the creation of the dump.rdb file.

CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE

(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that redis can write to it. You can search stackoverflow for how to do that).

You should now be able to shut down the redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the redis server from your project directory and changing the dir key back to `./`.

CONFIG SET dir "./"
BGSAVE

That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.

Cammack answered 23/9, 2017 at 19:57 Comment(2)
Make sure you grant the non-root user permission on the directory that the dump file will be stored in. In my case, I have a user redis, so I do: sudo chown redis:redis /var/lib/redisMaw
Very simple and helpful explanationSpongin
C
25

$ redis-cli

config set stop-writes-on-bgsave-error no

According to Redis documentation, this is recommended only if you don't have RDB snapshots enabled or if you don't care about data persistence in the snapshots.

"By default Redis will stop accepting writes if RDB snapshots are enabled (at least one save point) and the latest background save failed. This will make the user aware (in a hard way) that data is not persisting on disk properly, otherwise chances are that no one will notice and some disaster will happen."

What you should be doing instead is:

# redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
OK
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
OK
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379>

Please make sure /data/tmp has enough disk space.

Commando answered 4/3, 2021 at 10:33 Comment(1)
How do you check if/when the BGSAVE command is complete?Satyr
O
23

I had encountered this error and was able to figure out from the log that it was caused by insufficient disk space. All the data that had been inserted was, in my case, no longer needed. So I tried to FLUSHALL, but since the redis-rdb-bgsave process was running, it would not allow flushing the data either. I followed the steps below and was able to continue.

  1. Login to redis client
  2. Execute config set stop-writes-on-bgsave-error no
  3. Execute FLUSHALL (Data stored was not needed)
  4. Execute config set stop-writes-on-bgsave-error yes

The process redis-rdb-bgsave was no longer running after the above steps.

Oman answered 4/9, 2018 at 7:9 Comment(0)
F
12

I faced a similar issue; the main reason behind it was the memory (RAM) consumption by redis. My EC2 machine had 8 GB RAM (around 7.4 GB available for consumption).

When my program was running, the RAM usage went up to 7.2 GB, leaving hardly ~100 MB free, and this generally triggers the MISCONF Redis error.

You can determine the RAM consumption using the htop command: look for the Mem attribute after running htop. If it shows high consumption (in my case it was 7.2GB/7.4GB), it's better to upgrade to an instance with more memory. In this scenario, using config set stop-writes-on-bgsave-error no would be a disaster for the server and may end up disrupting other services running on it (if any). So it is better to avoid that config command and UPGRADE YOUR REDIS MACHINE.

FYI: You may need to install htop to make this work: sudo apt-get install htop
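If htop isn't available, a rough equivalent check can be scripted from /proc/meminfo (Linux only; the 100 MB threshold below is just an illustration, not a Redis rule):

```shell
# Report available memory and warn when it is low.
avail_mb=$(awk '/^MemAvailable:/{print int($2/1024)}' /proc/meminfo)
echo "available memory: ${avail_mb} MB"
if [ "$avail_mb" -lt 100 ]; then
    echo "warning: low memory; BGSAVE's fork may fail"
fi
```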

Another cause can be some other RAM-heavy service running on your system; check for other services running on your server/machine/instance and stop them if they are not necessary. To list all the services running on your machine, use service --status-all.

And a suggestion for people directly pasting the config command: please do a bit of research and at least warn the user before suggesting such commands. As @Rodrigo mentioned in his comment: "It does not look cool to ignore the errors."

---UPDATE---

You can also configure maxmemory and maxmemory-policy to define the behavior of Redis when a specific memory limit is reached. For example, if I want to keep the memory limit at 6GB and delete the least recently used keys from the DB to make sure that redis memory usage does not exceed 6GB, then we can set these two parameters (in redis.conf or via the CONFIG SET command):

maxmemory 6gb
maxmemory-policy allkeys-lru

There are a lot of other values you can set for these two parameters; you can read about them here: https://redis.io/topics/lru-cache

Fiertz answered 15/2, 2019 at 13:57 Comment(0)
P
11

The Redis write-access problems that give this error message to the client have nowadays re-emerged in the official redis docker containers.

Redis from the official redis image tries to write the .rdb file in the containers /data folder, which is rather unfortunate, as it is a root-owned folder and it is a non-persistent location too (data written there will disappear if your container/pod crashes).

So after an hour of inactivity, if you have run your redis container as a non-root user (e.g. docker run -u 1007 rather than default docker run -u 0), you will get a nicely detailed error msg in your server log (see docker logs redis):

1:M 29 Jun 2019 21:11:22.014 * 1 changes in 3600 seconds. Saving...
1:M 29 Jun 2019 21:11:22.015 * Background saving started by pid 499
499:C 29 Jun 2019 21:11:22.015 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: Permission denied
1:M 29 Jun 2019 21:11:22.115 # Background saving error

So what you need to do is to map container's /data folder to an external location (where the non-root user, here: 1007, has write access, such as /tmp on the host machine), e.g:

docker run --rm -d --name redis -p 6379:6379 -u 1007 -v /tmp:/data redis

So it is a misconfiguration of the official docker image (which should write to /tmp not /data) that produces this "time bomb" that you will most likely encounter only in production... overnight over some particularly quiet holiday weekend :/

Payson answered 30/6, 2019 at 8:52 Comment(5)
Just wanted to add a comment here, as this ultimately helped resolve issues I was facing with redis in Docker. Our UAT and Dev Docker servers are Windows. Windows Defender will identify RDB files as potential viruses. So mounting your /data directory will temporarily resolve the parent issue; until Windows Defender quarantines the file, causing another. MAKE SURE you add the mounted data directory as an exception in Windows Defender to resolve this.Kashmir
Reminds me: that Windows Defender alert may not necessarily be a false positive - a cryptominer can infect the official Redis image even when running it without root and with all capabilities dropped - it's enough to expose its port to the netPayson
Thanks, that's a good point. Just curious, but how would an RDB file execute on the host, especially a Windows one? I suppose it could be executing within the container itself. But that's not specific to this particular container.Kashmir
Right, the payload would probably fail to execute on Windows, unless written entirely in Lua and thus as cross-platform as Redis itself... eval command is a devil's invention, regardless of the languagePayson
This has been an enlightening experience; thanks very much. Apparently, our UAT/DEV compose files were exposing ports outside of the Docker network. I don't know how this is possible, but those instances were receiving admin commands and, indeed. were launching a crypto miner. I've disabled those ports, turned off the local RDB mount, and re-instated the Windows Defender exception (though, that won't matter with the mount off). I need to investigate HOW these commands were getting through our firewall, but I'm monitoring closelyKashmir
T
10

For me:

config set stop-writes-on-bgsave-error no

and then restarting my Mac made it work.

Terpsichore answered 15/8, 2019 at 2:33 Comment(0)
H
10

In redis.conf (around line 235), try changing the config like this:

- stop-writes-on-bgsave-error yes
+ stop-writes-on-bgsave-error no
Heterocercal answered 27/11, 2020 at 12:14 Comment(1)
Note: on macOS this file is located at /usr/local/etc/redis.conf and you need to run this command to restart redis: brew services restart redisSpatter
B
9

A more permanent fix might be to look in /etc/redis/redis.conf around lines 200-250, where there are settings for the rdb features that were not part of redis back in the 2.x days.

notably

dir ./

can be changed to

dir /home/someuser/redislogfiledirectory

or you could comment out all the save lines, and not worry about persistence. (See the comments in /etc/redis/redis.conf)

Also, don't forget

service redis-server stop
service redis-server start
Boomkin answered 19/9, 2016 at 4:26 Comment(1)
stopping and start did fixed it for me :)Wig
F
7

None of those answers explains the reason why the rdb save failed.

In my case, I checked the redis log and found:

14975:M 18 Jun 13:23:07.354 # Background saving terminated by signal 9

I ran the following command in a terminal:

sudo egrep -i -r 'killed process' /var/log/

It displayed:

/var/log/kern.log.1:Jun 18 13:23:07 10-10-88-16 kernel: [28152358.208108] Killed process 28416 (redis-server) total-vm:7660204kB, anon-rss:2285492kB, file-rss:0kB

That's it! The process (redis saving the rdb) was killed by the OOM killer.

References:

https://github.com/antirez/redis/issues/1886

Finding which process was killed by Linux OOM killer

Franza answered 19/6, 2017 at 3:54 Comment(0)
H
6

Yep, this happens because the current user does not have permission to modify "dump.rdb".

So, instead of creating a new RDB file, you can also give permission to the old file (by changing its ownership).

In redis-cli enter:

config get dir

you will get "/usr/local/var/db/redis" (this is the location where redis writes the data)

go to this location using terminal

cd /usr/local/var/db

Type this command (with your username):

sudo chown -R [username] db

This will change the owner.

This works for me.

Hookworm answered 11/6, 2021 at 5:58 Comment(0)
N
5

FWIW, I ran into this and the solution was to simply add a swapfile to the box. I used this method: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04

Nata answered 12/2, 2015 at 16:8 Comment(4)
How did you figure out the memory overflow was the issue? I might be having the same issue.Coquet
@Coquet I don't remember. If I had to guess I would say that maybe something in the logs were complaining about not being able to allocate memory. Sorry I can't be more helpful.Nata
At first I also thought it would be a great solution to combine swap and redis; then I did some research and reached this article antirez.com/news/52, which argues it is the wrong way to use redis. Anyway, I don't 100% agree with it. Are you happy with the performance of redis with swap?Carlettacarley
@Coquet In your Redis log you would see "Cannot allocate memory" errors. See here on how to see the log file: #16337607Dwightdwindle
K
5

I know this thread is slightly older, but here's what worked for me when I got this error earlier, knowing I was nowhere near the memory limit (both answers were found above).

Hopefully this could help someone in the future if they need it.

  1. Checked CHMOD on dir folder... found somehow the symbolic notation was different. CHMOD dir folder to 755
  2. dbfilename permissions were good, no changes needed
  3. Restarted redis-server
  4. (Should've done this first, but ah well) Referenced the redis-server.log and found that the error was the result of access being denied.

Again- unsure how the permissions on the DIR folder got changed, but I'm assuming CHMOD back to 755 and restarting redis-server took care of it as I was able to ping redis server afterwards.

Also- to note, redis did have ownership of the dbfilename and DIR folder.

Kultur answered 21/6, 2020 at 6:14 Comment(0)
A
4

I too was facing the same issue. Both answers (the most upvoted one and the accepted one) only give a temporary fix.

Moreover, config set stop-writes-on-bgsave-error no is a horrible way to overlook this error, since what this option does is stop redis from notifying you that writes have been stopped and move on without writing the data to a snapshot. It is simply ignoring the error. Refer to this.

As for setting dir via the config in redis-cli: once you restart the redis service, this gets cleared too, and the same error pops up again. The default value of dir in redis.conf is ./, and if you start redis as the root user, then ./ is /, to which write permissions aren't granted; hence the error.

The best way is to set the dir parameter in the redis.conf file and give proper permissions to that directory. Most Debian distributions have it at /etc/redis/redis.conf.

Amaranth answered 11/6, 2018 at 9:3 Comment(0)
C
4

Check your Redis log before taking any action. Some of the solutions in this thread may erase your Redis data, so be careful about what you are doing.

In my case, the machine was running out of RAM. This also can happen when there is no more free disk space on the host.

Consulate answered 17/4, 2020 at 22:24 Comment(1)
I was running out of RAMHervey
R
4

After banging my head against so many SO questions, @Axel Advento's answer finally worked for me, but with a few extra steps, since I was still facing permission issues.
I had to switch to the redis user, create a new dir in its home dir, and then set it as redis's dir.

sudo su - redis -s /bin/bash
mkdir redis_dir
redis-cli CONFIG SET dir $(realpath redis_dir)
exit # to logout from redis user (optional)
Rudolfrudolfo answered 2/5, 2020 at 5:21 Comment(0)
A
4

In my case, the Ubuntu virtual machine's disk was full, and that's why I was getting this error. Deleting some files from the disk solved the issue.

Amann answered 29/5, 2021 at 17:51 Comment(0)
L
3

In case you are using docker/docker-compose and want to prevent redis from writing to a file, you can create a redis config and mount it into the container.

docker-compose.override.yml

  redis:
      volumes:
        - ./redis.conf:/usr/local/etc/redis/redis.conf
      ports:
        - 6379:6379

You can download the default config from here

In the redis.conf file, make sure you comment out these 3 lines:

save 900 1
save 300 10
save 60 10000

You can view more solutions for removing the persistent data here.

Licastro answered 23/6, 2019 at 2:5 Comment(0)
E
2

I hit this problem while working on a server with AFS disk space because my authentication token had expired, which yielded Permission Denied responses when the redis-server tried to save. I solved this by refreshing my token:

kinit USERNAME_HERE -l 30d && aklog

Extend answered 28/10, 2017 at 12:44 Comment(0)
M
2

In my case it happened because I had just installed redis using the quick way, so redis was not running as root. I was able to solve this problem by following the instructions under the Installing Redis more properly section of their Quick Start Guide. After doing so, the problem was solved and redis is now running as root. Check it out.

Megillah answered 22/1, 2020 at 7:31 Comment(0)
S
0

If you are running Redis locally on a Windows machine, try to "run as administrator" and see if it works. In my case, the problem was that Redis was located in the "Program Files" folder, which restricts permissions by default, as it should.

However, do not permanently run Redis as an administrator; you don't want to grant it more rights than it is supposed to have. You want to solve this by the book.

So, running it as an administrator helps to quickly identify the problem, but it is not the cure. A likely scenario is that you have put Redis in a folder without write rights, and as a consequence the DB file is stored in that same location.

You can solve this by opening redis.windows.conf and searching for the following configuration:

    # The working directory.
    #
    # The DB will be written inside this directory, with the filename specified
    # above using the 'dbfilename' configuration directive.
    #
    # The Append Only File will also be created inside this directory.
    #
    # Note that you must specify a directory here, not a file name.
    dir ./

Change dir ./ to a path you have regular read/write permissions for.

You could also just move the Redis folder in its entirety to a folder you know has the right permissions.

Skewbald answered 19/6, 2018 at 8:54 Comment(0)
A
0

In my case it was related to free disk space (you can check it with the df -h command). When I freed some space, this error disappeared.

Aila answered 6/8, 2019 at 6:40 Comment(0)
S
0

Please be aware that this error can also appear when your server is under attack. I found that redis was failing to write to '/etc/cron.d/web'; after I corrected the permissions, a new file containing a mining algorithm with some hiding options was added there.

Stevenstevena answered 2/6, 2020 at 11:0 Comment(0)
K
0
# On redis 6.0.4, if you see the error 'MISCONF Redis is configured to save RDB snapshots',
# it may be because redis doesn't have permission to create the dump.rdb file.
# Starting the server (and client) with sudo works around the permission problem:
sudo redis/bin/redis-server
sudo redis/bin/redis-cli
Kismet answered 9/6, 2020 at 14:10 Comment(0)
S
0

Check the permissions on the directory /var/lib/redis; the owner and group should be redis:redis.

Styliform answered 4/3, 2021 at 8:13 Comment(0)
M
0

I got the same issue. In my case it was due to full usage of the instance's memory, as I had used a VM instance for redis and redis uses the instance's RAM. After increasing the disk size of the instance, the issue was resolved.

Mucoviscidosis answered 15/4, 2023 at 14:44 Comment(0)
S
0

As a Windows user, all I had to do was change the permissions on the file. I had saved it in a folder with read-only permissions, so I had to change them to allow my app to write to it as well.

Sagacity answered 9/5, 2023 at 23:10 Comment(0)
E
-2

As pointed out by @Chris, the problem is likely due to low memory. We started experiencing it when we allocated too much RAM to MySQL (innodb_buffer_pool_size).

To ensure there's enough RAM for Redis and other services we reduced innodb_buffer_pool_size on MySQL.

Ewell answered 15/9, 2017 at 21:54 Comment(0)
C
-2

In my case, the reason was very low free space on disk (only 35 MB). I did the following:

  1. Stopped all Redis-related processes
  2. Deleted some files on the disk to make adequate free space
  3. Deleted the redis dump file (if the existing data is not needed)

    sudo rm /var/lib/redis/*

  4. Deleted all the keys of all the existing databases

    sudo redis-cli flushall

  5. Restarted all celery tasks and checked the corresponding logs for any issues
Contaminant answered 12/3, 2018 at 6:43 Comment(1)
You must have done this on your dev instance. Not right solution when dealing with data centric applications.Pittance
M
-2

You must chown and chmod the new folder:

chown -R redis ... and chmod ...

Mccreery answered 8/7, 2019 at 9:51 Comment(0)
