Cluster already has maximum shards open

6

34

I'm using Windows 10 and I'm getting

Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;]

How can I resolve this? I don't mind if I lose data since it only runs locally.

Bruckner answered 9/6, 2020 at 13:46 Comment(1)
Delete some of your oldest indices or add one more node. – Hoad
30

Aside from the answers mentioned above, you can also raise the shard limit as a stopgap until you get around to rearchitecting the nodes:

curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }'

Besides, the following can be useful, though it should be done with CAUTION, of course:

  • Get total number of unassigned shards in cluster
curl -XGET -u elasticuser:yourpassword http://localhost:9200/_cluster/health\?pretty | grep unassigned_shards

USE WITH CAUTION

  • To DELETE the unassigned shards in a cluster (USE WITH CAUTION)
curl -XGET -u elasticuser:yourpassword http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | xargs -I{} curl -XDELETE -u elasticuser:yourpassword "http://localhost:9200/{}" # USE WITH CAUTION
Burgonet answered 11/3, 2021 at 6:56 Comment(1)
Do not run the delete unassigned shards command: it deletes the indices, not the shards. – Henkel
13

You are hitting the cluster.max_shards_per_node limit. Add more data nodes or reduce the number of shards in the cluster.
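
To see how close you are to the limit, compare the active shard count against cluster.max_shards_per_node (default 1000) multiplied by the number of data nodes. A quick check, assuming Elasticsearch is listening on localhost:9200:

curl -s "localhost:9200/_cluster/health?pretty" | grep -E "number_of_data_nodes|active_shards"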

Letter answered 9/6, 2020 at 13:53 Comment(4)
How can I do it? – Bruckner
Try setting it dynamically on a running Elasticsearch cluster with the following curl command: curl -XPUT $CLUSTER_URL/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":5100}}' – Letter
@Amitkumar: Reduce the number of shards? Shouldn't it be increased? – Lynea
@Lynea Amit probably meant deleting some of the existing indexes that maybe are not needed anymore. – Inodorous
13

If you don't mind the data loss, delete old indices. The easy way is to do it from the GUI (Kibana > Management > DevTools). To list all indices:

GET /_cat/indices/

You can delete by index name or with a wildcard pattern, like below:

DELETE /<index-name>

e.g.:

DELETE /logstash-2020-10*
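
Note that on recent Elasticsearch versions (8.x) a wildcard delete like the one above may be rejected, because action.destructive_requires_name defaults to true. If that happens, the check can be relaxed; a sketch, to be used only if you accept the risk of wildcard deletes:

curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "action.destructive_requires_name": false } }'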
Revulsion answered 21/10, 2021 at 9:42 Comment(0)
3

You probably have too many shards per node.

May I suggest you look at the following resources about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

Badmouth answered 21/12, 2020 at 9:48 Comment(0)
1

I just had the same "maximum shards open" error. Closing indices also works, so there is no need to delete them if you want to keep the data for later. Closed indices cannot be queried, but their data remains.
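
For example, assuming a hypothetical index named logstash-2020-10-01 on a local cluster, closing it (and reopening it later) looks like this:

curl -X POST localhost:9200/logstash-2020-10-01/_close
curl -X POST localhost:9200/logstash-2020-10-01/_open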

Invert answered 2/5, 2023 at 17:4 Comment(0)
1

My 2 cents on the matter: this action would add [x] total shards, but this cluster currently has [x]/[x] maximum shards open;

When adding or searching data within an index, that index is in an open state. The more indices you keep open, the more shards count against the limit.

  • Each node can accommodate a limited number of shards; check how many a node can hold via the cluster.max_shards_per_node setting:
GET /_cluster/settings?include_defaults=true

Depending on the number of data nodes, your cluster has a maximum number of shards (cluster.max_shards_per_node × number of data nodes). To check how many are currently in use, run GET _cluster/health and look at the active_shards value (the total number of active shards, including primary and replica shards).

Potential solutions to reduce the number of shards:

  • Reduce the number of replicas (redundant copies of the data) for an index:
PUT /<index>/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}

# check index setting "number_of_replicas" afterward
GET <index>/_settings
  • Close a specific index (once an index is closed, you cannot add data to it or search for any data within the index)
POST /<index>/_close
  • Add another data node to the cluster.

Last but not least:

Shard size matters because it impacts both search latency and write performance. Too many small shards will exhaust the memory (JVM heap), while too few large shards prevent OpenSearch from properly distributing requests. A good rule of thumb is to keep shard size between 10–50 GB.
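
To get a feel for your current shard sizes, the _cat/shards API can list shards sorted by store size; a small sketch, assuming a local cluster on localhost:9200:

curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,store&s=store:desc"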

Isola answered 8/4 at 13:13 Comment(0)
