How to debug the error "OOM command not allowed when used memory > 'maxmemory'" in Redis?
I'm getting "OOM command not allowed" when trying to set a key. maxmemory is set to 500M with maxmemory-policy "volatile-lru", and I'm setting a TTL for each key sent to Redis.

The INFO command returns: used_memory_human:809.22M

  1. If maxmemory is set to 500M, how did I reach 809M?
  2. The INFO command does not show any keyspaces. How is that possible?
  3. KEYS * returns "(empty list or set)". I've tried changing the db number, but still no keys are found.

Here is the INFO command output:

redis-cli -p 6380
redis 127.0.0.1:6380> info
# Server
redis_version:2.6.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 2.6.32-358.14.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:28291
run_id:229a2ee688bdbf677eaed24620102e7060725350
tcp_port:6380
uptime_in_seconds:1492488
uptime_in_days:17
lru_clock:1429357

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:848529904
used_memory_human:809.22M
used_memory_rss:863551488
used_memory_peak:848529192
used_memory_peak_human:809.22M
used_memory_lua:31744
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.0.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1375949883
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok

# Stats
total_connections_received:3
total_commands_processed:8
instantaneous_ops_per_sec:0
rejected_connections:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0

# Replication
role:master
connected_slaves:0

# CPU
used_cpu_sys:18577.25
used_cpu_user:1376055.38
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Keyspace
redis 127.0.0.1:6380>
Deceive answered 25/8, 2013 at 15:0 Comment(0)
H
28

Redis' maxmemory volatile-lru policy can fail to free enough memory if the maxmemory limit is already used by the non-volatile keys.
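
A quick way to check whether that is happening is to count the keys that have no TTL at all, since volatile-lru can never evict those. A minimal sketch, assuming redis-cli on the question's port 6380 and a server new enough to support SCAN (Redis 2.8+):

# count keys without a TTL (TTL returns -1); volatile-lru cannot evict these
redis-cli -p 6380 --scan | while read -r key; do
  [ "$(redis-cli -p 6380 ttl "$key")" -eq -1 ] && echo "$key"
done | wc -l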

Hemstitch answered 24/7, 2015 at 1:3 Comment(0)
G
12

Any chance you changed the number of databases? If you use a very large number, the initial memory usage may be high.
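
You can check the setting directly; note that databases can only be changed in redis.conf (followed by a restart), not at runtime:

redis-cli -p 6380 config get databases
# in redis.conf, set it back to the default and restart Redis:
# databases 16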

Gisela answered 27/8, 2013 at 6:45 Comment(4)
Yes, the number of databases was set to a high number. After returning to the default value, the memory was released and the error disappeared. I'm back on track, thanks a lot!Deceive
What do you mean 'number of databases'? Redis is only 1 database. You can't have more, unless you are talking about a different instance.Doublebank
@chloe: check out the SELECT command - redis.io/commands/SELECTPrichard
also CONFIG GET databases will show you how many databases your Redis instance has.Prichard
R
4

In our case, maxmemory was set to a high amount, then someone on the team changed it to a lower amount after data had already been stored.
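
If you suspect this, you can compare the configured limit with the actual usage (port 6380 as in the question):

# what the server is allowed to use vs. what it is actually using
redis-cli -p 6380 config get maxmemory
redis-cli -p 6380 info | grep used_memory_human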

Resort answered 14/3, 2016 at 14:41 Comment(0)
F
4

My problem was that old data wasn't being released, which caused the Redis db to get jammed up quickly. In Python, I cleared the cache server by running

import redis

red = redis.StrictRedis(...)
red.flushdb()  # removes every key in the current database

And then limited the TTL to 24h by saving the file with "ex":

red.set(<FILENAME>, png, ex=(60 * 60 * 24))  # expire after 24 hours
Farly answered 21/9, 2020 at 17:57 Comment(2)
WARNING! FLUSHDB deletes every key in a redis database. If you're simply looking to free up memory from previously deleted keys, do not use it. Docs: redis.io/commands/flushdbChickabiddy
You're right @HartleyBrody, in my case I didn't care but someone else might.Farly
T
2

Memory is controlled in the config, so your instance is limited as the error says. You can either look in your redis.conf or, from the CLI tool, issue "config get maxmemory" to see the limit.

If you manage this Redis instance, you'll need to consult and adjust the config file. It is usually located at /etc/redis.conf or /etc/redis/redis.conf.

If you are using a Redis provider, you will need to contact them about increasing your limit.
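
If you do manage the instance yourself, a minimal sketch of raising the limit; 1gb is just an example value, and the conf path may differ on your system:

# /etc/redis/redis.conf (or /etc/redis.conf) -- persists across restarts
maxmemory 1gb
maxmemory-policy volatile-lru

# or change it at runtime; remember to mirror the change in the conf file
redis-cli -p 6380 config set maxmemory 1gb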

Towboat answered 25/8, 2013 at 18:26 Comment(0)
S
2

Verifying Memory Usage of Your Redis Instance

You can get more insight into the Redis memory usage on your Hypernode by running the command below.

redis-cli info | grep memory_human

# Memory
used_memory_human:331.51M
total_system_memory_human:5.83G
maxmemory_human:896.00M

Fix It

The quick fix is to flush the Redis cache so there is plenty of Redis memory available again. You can do this by running

redis-cli flushall

To prevent this you can try compressing the Redis data, but most of the time this will only postpone the problem. After a while, when the Redis cache is completely filled up again, the errors will reappear.

Check if Your Keys Have an Expire Set

You can inspect your keys with the command below

redis-cli info keyspace

# Keyspace
db0:keys=59253,expires=1117,avg_ttl=81268890
db1:keys=13608,expires=904,avg_ttl=82515590
db2:keys=144,expires=144,avg_ttl=199414742

This gives you some insight into the Redis databases you've configured, their keys, and whether those keys have an expire set or not. In the above example, a huge portion of the Redis keys don't have an expire set. This means those keys will never expire and be removed from the Redis cache to make room for new keys. As a result, the cache is at greater risk of reaching its maximum.
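
If you want to give such a key an expiry, a minimal sketch, using a hypothetical key name some:key and a 24-hour TTL:

redis-cli ttl some:key            # -1 means the key has no expire set
redis-cli expire some:key 86400   # give it a 24-hour TTL (86400 seconds)
redis-cli ttl some:key            # now returns the remaining time in seconds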

However, to solve this issue there is only one real fix: Upgrade to a bigger node that has more memory.

here is the guide

Schwinn answered 22/6, 2023 at 13:15 Comment(1)
flushall resolved the issue for us and we will check if we need to upgrade to a bigger nodeFetish
D
1

To debug this issue, you need to check what action you performed on the redis-cli manually or from your code.

  1. It is possible you ran KEYS * while there was very little memory left to accommodate the memory consumed by this command, which leads to throttling of the cache service (a gentler alternative is sketched below).
  2. In code, your changes might affect key insertion, for example writing duplicate data instead of unique keys, which leads to the overall memory limit being exceeded.
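
For point 1, a minimal sketch of iterating keys with SCAN instead of KEYS *, which avoids blocking the server and building one huge reply (redis-cli --scan requires Redis 2.8+; the pattern user:* is just an example):

# SCAN walks the keyspace incrementally instead of returning everything at once
redis-cli -p 6380 --scan --pattern 'user:*' | head -n 20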
Dispute answered 18/9, 2020 at 15:37 Comment(0)
