How to increase _cluster/settings/cluster.max_shards_per_node for AWS Elasticsearch Service

I use the AWS Elasticsearch Service (version 7.1) and its built-in Kibana to manage application logs. New indexes are created daily by Logstash. From time to time Logstash fails with an error about the maximum shards limit being reached, and I have to delete old indexes before it starts working again.
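
For example, when the error appears I end up running something like this in Dev Tools (the index pattern logstash-2020.07.* is only an illustration; your index names will differ):

DELETE /logstash-2020.07.*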

I found in this document (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html) that one option is to increase _cluster/settings/cluster.max_shards_per_node.
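
To see how close the cluster is to that limit, note that the cluster-wide cap is cluster.max_shards_per_node multiplied by the number of data nodes. Both numbers can be read from the cluster health API, for example:

GET /_cluster/health?filter_path=number_of_data_nodes,active_shards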

So I tried that by putting the following command in Kibana Dev Tools:

PUT /_cluster/settings
{
  "defaults" : {
      "cluster.max_shards_per_node": "2000"
  }
}

But I got this error:

{
  "Message": "Your request: '/_cluster/settings' payload is not allowed."
}

Someone suggested that this error occurs when you try to update a setting that AWS does not allow, but this document (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_7_1) shows that cluster.max_shards_per_node is in the allowed list.

Please suggest how to update this setting.

Aright answered 2/9, 2020 at 7:49 Comment(1)
The documentation states that this setting is unbounded/unlimited. This is clearly not the case, as we've experienced. (Wordsmith)
F
31

You're almost there; you just need to rename defaults to persistent:

PUT /_cluster/settings
{
  "persistent" : {
      "cluster.max_shards_per_node": "2000"
  }
}
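
If you want to confirm the change took effect, you can read the settings back (flat_settings just makes the key easier to spot):

GET /_cluster/settings?flat_settings=true

The new value should appear under "persistent" as "cluster.max_shards_per_node" : "2000".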

Beware, though: the more shards you allow per node, the more resources each node needs and the worse performance can get.
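
An alternative to only raising the limit is to create fewer shards per daily index. A sketch using a legacy index template (the template name, pattern, and shard counts here are illustrative, not taken from your setup):

PUT /_template/logstash_shards
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}

With daily indexes, one primary plus one replica per day grows the shard count far more slowly than templates that create several primaries per index.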

Feldspar answered 2/9, 2020 at 7:59 Comment(1)
Sorry for the late response. It works, but I forgot to accept your answer. (Aright)
