Redis Cross Slot error

I am trying to insert multiple key/values at once on Redis (some values are sets, some are hashes) and I get this error: ERR CROSSSLOT Keys in request don't hash to the same slot.

I'm not doing this from redis-cli but from Go code that needs to write multiple key/values to a Redis cluster. I see other places in the code where multiple keys/values are written this way, and I don't understand why mine doesn't work. What are the hashing requirements to avoid this error?

Thanks

Wilonah answered 26/6, 2016 at 19:57 Comment(0)

In a cluster topology, the keyspace is divided into hash slots. Different nodes will hold a subset of hash slots.

Multi-key operations, transactions, and Lua scripts involving multiple keys are allowed only if all the keys involved hash to the same slot (and therefore live on the same node).

Redis Cluster implements all the single-key commands available in the non-distributed version of Redis. Commands performing complex multi-key operations, like Set type unions or intersections, are implemented as well, as long as the keys all hash to the same slot.

You can force the keys to map to the same slot by using Hash Tags: when a key contains a {...} section, only the part between the braces is hashed.
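To see why hash tags work, here is a standalone sketch of the slot computation from the Redis Cluster spec: the slot is CRC16-XMODEM of the key modulo 16384, and when the key contains a non-empty {...} section only that part is hashed. (This is an illustration, not Redis's own code; the key names are just examples.)

```go
package main

import "fmt"

// crc16 implements CRC16-XMODEM (polynomial 0x1021, init 0),
// the checksum Redis Cluster uses for key slots.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// hashSlot returns the cluster slot for a key, honoring hash tags:
// if the key contains a non-empty {...} section, only that part is hashed.
func hashSlot(key string) uint16 {
	for i := 0; i < len(key); i++ {
		if key[i] == '{' {
			for j := i + 1; j < len(key); j++ {
				if key[j] == '}' {
					if j > i+1 { // non-empty tag: hash only its content
						key = key[i+1 : j]
					}
					return crc16([]byte(key)) % 16384
				}
			}
			break // '{' without '}': hash the whole key
		}
	}
	return crc16([]byte(key)) % 16384
}

func main() {
	// Both keys share the tag {user:1000}, so they land in the same slot
	// and can be used together in one MSET, SUNIONSTORE, transaction, etc.
	fmt.Println(hashSlot("{user:1000}.following"), hashSlot("{user:1000}.followers"))
	// Without tags, related keys will usually land in different slots.
	fmt.Println(hashSlot("user:1000.following"), hashSlot("user:1000.followers"))
}
```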

Towhaired answered 26/6, 2016 at 20:30 Comment(4)
Thanks, I didn't know about the {} pattern; this is why it works in the other part of the code.Wilonah
Hi, I'm using Redis for job queue processing with the Node.js kue module. It works with a localhost connection, but now that I've moved my Redis database to AWS ElastiCache for production, connecting to it and running the jobs gives me this error: ReplyError: CROSSSLOT Keys in request don't hash to the same slot. Please help me.Leitmotiv
It solved my error, thank you, and here is a helpful link as well: aws.amazon.com/premiumsupport/knowledge-center/…Chromoplast
Actually it is about the SAME keyslot. Same node is just a consequence of this.Janson

ERR CROSSSLOT Keys in request don't hash to the same slot

As the error message suggests, the operation succeeds only if all of the keys hash to the same slot; otherwise you will see this failure. The error occurs even when all the slots involved belong to the same node. The check is strict: per the code, all keys must hash to the same slot.
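Because the check is per slot, one workaround besides hash tags is to bucket keys by slot client-side and issue one multi-key command per bucket. A minimal sketch (the `groupBySlot` helper is my own name, and for brevity it skips hash-tag extraction; the slot math follows the cluster spec's CRC16-XMODEM mod 16384):

```go
package main

import "fmt"

// crc16 implements CRC16-XMODEM, the checksum Redis Cluster uses for slots.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// groupBySlot buckets keys by cluster slot; each bucket is then safe to
// pass to a single multi-key command such as DEL or MGET without
// triggering CROSSSLOT. (Hash tags are ignored here for brevity.)
func groupBySlot(keys []string) map[uint16][]string {
	groups := make(map[uint16][]string)
	for _, k := range keys {
		slot := crc16([]byte(k)) % 16384
		groups[slot] = append(groups[slot], k)
	}
	return groups
}

func main() {
	for slot, keys := range groupBySlot([]string{"a", "b", "a:1", "b:1"}) {
		fmt.Printf("slot %d -> %v\n", slot, keys)
	}
}
```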

Cudgel answered 12/4, 2019 at 12:44 Comment(0)

I got this error when I created a Redis cluster via AWS. The issue was fixed when I disabled cluster mode and ran Redis as a single node.

Gamba answered 22/9, 2023 at 14:56 Comment(1)
Thank you! I got this error too with the Redis Cache Plug-in using an AWS ElastiCache Redis instance. When I created a standalone instance, it worked, thanks!Wrapping

In case you are facing this issue while using Celery with ElastiCache Redis in cluster mode, this may be the answer you are looking for:

So far, Celery does not support ElastiCache Redis in cluster mode, so either disable cluster mode or switch to another supported message broker such as RabbitMQ or AWS SQS.
Because cluster mode is highly recommended in production (it keeps the site highly available), it is better not to turn it off but to switch to another message broker instead.

I tried to make it work by customizing the Celery worker based on this package, but it did not work:
https://github.com/hbasria/celery-redis-cluster-backend

Here's the repo where I tried to apply the above package:
https://github.com/congson95dev/celery-tutorial/tree/celery-redis-cluster-mode

It ended up at a dead end with these errors:
ResponseError('CLUSTERDOWN The cluster is down') or redis.exceptions.ResponseError: MOVED 192.168.80.7:6379

=> So to conclude, please switch to another supported message broker such as RabbitMQ or AWS SQS.
Here's an example of how to set up Celery using AWS SQS:
https://github.com/congson95dev/celery-tutorial/tree/celery-sqs

Delaney answered 2/5 at 9:9 Comment(2)
Please don't post the same answer at multiple questionsPeewee
Sorry, I just thought it would be better if people could find it easily when searching across multiple questions.Delaney

Instead of issuing a single DEL with multiple keys, pipeline one DEL per key; each command then targets a single slot, so CROSSSLOT cannot occur:

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// deleteKeys deletes the given keys one DEL per key via a pipeline and
// returns how many keys were actually removed.
func deleteKeys(ctx context.Context, redisClient redis.UniversalClient, keys []string) (int64, error) {
	cmds, err := redisClient.Pipelined(ctx, func(pipe redis.Pipeliner) error {
		for _, key := range keys {
			pipe.Del(ctx, key)
		}
		return nil
	})
	if err != nil {
		return 0, err
	}

	var deleted int64
	for i, cmd := range cmds {
		n, err := cmd.(*redis.IntCmd).Result()
		if err != nil {
			// Log and skip keys whose DEL failed; the rest still count.
			fmt.Printf("DEL::%v::%v\n", keys[i], err)
			continue
		}
		deleted += n
	}
	return deleted, nil
}
Perpetuate answered 13/7 at 18:5 Comment(0)
