I am running into the exception "me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level." when I have RF=1, read consistency level ONE, and one of the nodes in a 6-node ring/cluster is down. All of my reads are failing with this exception. Any idea why? Shouldn't only the reads looking for data on the down node fail, with all other reads succeeding?
There could be a few possibilities:
- You're running a multi-row query (get_range, get_indexed_slices, multiget, or the CQL equivalents) that requires multiple nodes to be up
- Your cluster is unbalanced, with the down node owning most of the ring; a bad multi-DC configuration could also produce something similar
- Your cluster wasn't in a good state to begin with, with some nodes not seeing others. Make sure nodetool ring shows the same output when run against each node in the cluster
If none of those are the cause, double-check that you're specifying the consistency level correctly with Hector and cqlsh; a sketch of setting it explicitly in Hector follows.
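Here's a minimal sketch of pinning reads and writes to CL ONE via Hector's ConfigurableConsistencyLevel policy, rather than relying on defaults (assumes Hector 1.x; the cluster name, host, and keyspace name are placeholders, substitute your own):
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class ConsistencyCheck {
    public static void main(String[] args) {
        // Placeholder cluster name and host; point these at your own ring.
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");

        // Explicitly set the default read/write consistency to ONE.
        ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
        ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
        ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);

        // Every query made through this keyspace handle uses the policy above.
        Keyspace keyspace = HFactory.createKeyspace("Keyspace1", cluster, ccl);
        System.out.println("Keyspace handle created: " + keyspace.getKeyspaceName());
    }
}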
I've seen something similar when I misconfigured my replication settings; specifically, I had the wrong datacenter names in the replication strategy. Double-check what your DCs are (assuming you're using NetworkTopologyStrategy).
If you don't already know your DC names, in a shell on one of the nodes run:
$ nodetool -h localhost ring
Address         DC         Rack   Status  State   Load      Owns    Token
                                                                    141784319550391000000000000000000000000
172.26.233.135  Cassandra  rack1  Up      Normal  25.75 MB  16.67%  0
172.26.233.136  Cassandra  rack1  Up      Normal  26.03 MB  16.67%  28356863910078200000000000000000000000
172.26.233.137  Cassandra  rack1  Up      Normal  27.19 MB  16.67%  56713727820156400000000000000000000000
172.26.233.138  Cassandra  rack1  Up      Normal  26.78 MB  16.67%  85070591730234600000000000000000000000
172.26.233.139  Solr       rack1  Up      Normal  24.47 MB  16.67%  113427455640313000000000000000000000000
172.26.233.140  Solr       rack1  Up      Normal  26.66 MB  16.67%  141784319550391000000000000000000000000
You can see we have two DCs, Cassandra and Solr (this is a DSE cluster).
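As an aside, if you're on Cassandra 1.2 or later, nodetool status gives a friendlier view that groups nodes under a "Datacenter:" header, so the DC names are even easier to spot:
$ nodetool -h localhost status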
In cassandra-cli:
use Keyspace1;
describe;
CLI will print the strategy options:
Keyspace: Keyspace1:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
    Options: [DC1:3]
...
We have a mismatch: Cassandra is looking for a datacenter named DC1, which doesn't exist in this cluster, hence the UnavailableException. We need to update the replication options to match the actual DCs in the cluster. In CLI, update the strategy options for your keyspace using the real datacenter names:
update keyspace Keyspace1 with strategy_options = {Cassandra:3,Solr:2};
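If your version supports CQL3, the equivalent from cqlsh would look something like this (the keyspace name and replica counts here just mirror the CLI example above):
ALTER KEYSPACE Keyspace1 WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 3, 'Solr': 2};
Either way, after changing the replication factor, run nodetool repair on each node so existing data gets streamed to its new replicas:
$ nodetool repair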