There is a post about a Redis command to get all available keys, but I would like to do it with Python.
Any way to do this?
Use scan_iter()

scan_iter() is superior to keys() for large numbers of keys because it gives you an iterator you can use rather than trying to load all the keys into memory.

I had 1B records in my Redis and could never get enough memory to return all the keys at once.
SCANNING KEYS ONE-BY-ONE
Here is a Python snippet using scan_iter() to get all keys from the store matching a pattern and delete them one-by-one:

import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0)
for key in r.scan_iter("user:*"):
    # delete the key
    r.delete(key)
SCANNING IN BATCHES
If you have a very large list of keys to scan - for example, more than 100k keys - it will be more efficient to scan them in batches, like this:
import redis
from itertools import izip_longest

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# iterate a list in batches of size n
def batcher(iterable, n):
    args = [iter(iterable)] * n
    return izip_longest(*args)

# in batches of 500 delete keys matching user:*
# (filter out the None padding izip_longest adds to the final batch)
for keybatch in batcher(r.scan_iter('user:*'), 500):
    r.delete(*filter(None, keybatch))
I benchmarked this script and found that using a batch size of 500 was 5 times faster than scanning keys one-by-one. I tested different batch sizes (3, 50, 500, 1000, 5000) and found that a batch size of 500 seems to be optimal.
Note that whether you use the scan_iter() or keys() method, the operation is not atomic and could fail part way through.
DEFINITELY AVOID USING XARGS ON THE COMMAND-LINE
I do not recommend this example I found repeated elsewhere. It will fail for unicode keys and is incredibly slow for even moderate numbers of keys:
redis-cli --raw keys "user:*" | xargs redis-cli del

In this example xargs creates a new redis-cli process for every key! That's bad.
I benchmarked this approach to be 4 times slower than the first python example where it deleted every key one-by-one and 20 times slower than deleting in batches of 500.
What is user:* for? – Smew

See the count argument of the scan_iter method: redis-py-doc.readthedocs.io/en/master/… – Dinh

Yes, use keys() from the StrictRedis module:
>>> import redis
>>> r = redis.StrictRedis(host=YOUR_HOST, port=YOUR_PORT, db=YOUR_DB)
>>> r.keys()
Giving no pattern will fetch all of them (the default is '*'). As per the page linked:
keys(pattern='*')
Returns a list of keys matching pattern
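One detail worth noting: by default redis-py returns keys as bytes. A small sketch, assuming a local server, that sets decode_responses=True so keys() hands back plain strings:

import redis

# decode_responses=True makes keys() return str instead of bytes
r = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)
print(r.keys())          # all keys in the current db
print(r.keys('user:*'))  # only keys matching the pattern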
Consider the SCAN command, as it is now the preferred way to get all keys, with O(1) time complexity for each request (and O(N) for all of the requests). – Cremator

r.keys() is quite slow when you are trying to match a pattern and not just returning all keys. Consider using scan as suggested in the answer below. – Valrievalry

If you prefer the scan() option, then upvote the other answer. In fact, mine was the accepted one and I asked the OP to accept the other one. To me, downvoting this one per se doesn't really match the "this answer is not useful" thingie. – Palecek
import redis
r = redis.Redis("localhost", 6379)
for key in r.scan_iter():
    print(key)
This uses the Pyredis library.

From the Redis SCAN documentation:

Available since 2.8.0.
Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.
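To make the quoted complexity note concrete, here is a minimal sketch (same local connection assumed, count of 100 purely illustrative) that drives the cursor manually with scan() instead of scan_iter(), stopping once the cursor returns to 0:

import redis

r = redis.Redis("localhost", 6379, decode_responses=True)

# call SCAN repeatedly; a full iteration ends when the cursor comes back to 0
cursor = 0
while True:
    cursor, keys = r.scan(cursor=cursor, match='*', count=100)
    for key in keys:
        print(key)
    if cursor == 0:
        break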
I'd like to add some example code to go with Patrick's answer and others.
This shows results both using keys and the scan_iter technique.
And please note that Python 3 uses zip_longest instead of izip_longest. The code below loops through all the keys and displays them. I set the batch size to 12 to keep the output small.
I wrote this to better understand how the batching of keys worked.
import redis
from itertools import zip_longest

# connection/building of my redisObj omitted here

# iterate a list in batches of size n
def batcher(iterable, n):
    args = [iter(iterable)] * n
    return zip_longest(*args)

result1 = redisObj.get("TestEN")
print(result1)
result2 = redisObj.get("TestES")
print(result2)

print("\n\nLoop through all keys:")
keys = redisObj.keys('*')
counter = 0
print("len(keys)=", len(keys))
for key in keys:
    counter += 1
    print(counter, "key=" + key, " value=" + redisObj.get(key))

print("\n\nLoop through all keys in batches (using itertools)")
# list all keys in batches of 12
counter = 0
batch_counter = 0
print("Try scan_iter:")
for keybatch in batcher(redisObj.scan_iter('*'), 12):
    batch_counter += 1
    print(batch_counter, "keybatch=", keybatch)
    for key in keybatch:
        if key is not None:
            counter += 1
            print(" ", counter, "key=" + key, " value=" + redisObj.get(key))
Example output:
Loop through all keys:
len(keys)= 2
1 key=TestES value=Ola Mundo
2 key=TestEN value=Hello World
Loop through all keys in batches (using itertools)
Try scan_iter:
1 keybatch= ('TestES', 'TestEN', None, None, None, None, None, None, None, None, None, None)
1 key=TestES value=Ola Mundo
2 key=TestEN value=Hello World
Note Redis commands are single threaded, so doing a keys() can block other Redis activity. See the excellent post here that explains that in more detail: SCAN vs KEYS performance in Redis
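A small side sketch: if all you need is the number of keys, as in the len(keys) line above, DBSIZE reports it without pulling any keys at all:

import redis

r = redis.Redis("localhost", 6379)

# DBSIZE returns the number of keys in the current database without listing them
print("key count:", r.dbsize())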
An addition to the accepted answer above:

scan_iter can be used with a count parameter to tell Redis to search through a number of keys during a single iteration. This can speed up key fetching significantly, especially when used with a matching pattern and on big key spaces.

Be careful, though, when using very high values for count, since that may ruin the performance of other concurrent queries.
Here's an article with more details and some benchmarks: https://docs.keydb.dev/blog/2020/08/10/blog-post/
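A minimal sketch of passing count to scan_iter (the value 1000 is only an example; tune it for your workload):

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)

# hint the server to examine roughly 1000 keys per SCAN call while matching the pattern
for key in r.scan_iter(match='user:*', count=1000):
    print(key)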
I have improved on Patrick's and Neal's code and added export to csv:
import csv
import redis
from itertools import zip_longest

redisObj = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)
searchStr = ""

# iterate a list in batches of size n
def batcher(iterable, n):
    args = [iter(iterable)] * n
    return zip_longest(*args)

with open('redis.csv', 'w', newline='') as csvfile:
    fieldnames = ['key', 'value']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    print("\n\nLoop through all keys in batches (using itertools)")
    counter = 0
    batch_counter = 0
    print("Try scan_iter:")
    for keybatch in batcher(redisObj.scan_iter('*'), 500):
        batch_counter += 1
        # print(batch_counter, "keybatch=", keybatch)
        for key in keybatch:
            if key is not None:
                counter += 1
                val = ""
                if searchStr in key:
                    valType = redisObj.type(key)
                    print(valType)
                    match valType:
                        case "string":
                            val = redisObj.get(key)
                        case "list":
                            valList = redisObj.lrange(key, 0, -1)
                            val = '\n'.join(valList)
                        case "set":
                            valList = redisObj.smembers(key)
                            val = '\n'.join(valList)
                        case "zset":
                            # withscores=True returns (member, score) tuples
                            valList = redisObj.zrange(key, 0, -1, withscores=True)
                            val = '\n'.join([member + '=' + str(score) for member, score in valList])
                        case "hash":
                            valDict = redisObj.hgetall(key)
                            val = '\n'.join(['='.join(i) for i in valDict.items()])
                        case "stream":
                            val = ""
                        case _:
                            val = ""
                print(" ", counter, "key=" + key, " value=" + val)
                writer.writerow({'key': key, 'value': val})