How to check Elasticsearch cluster health?

I tried to check it via

curl -XGET 'http://localhost:9200/_cluster/health'

but nothing happened; the command just seemed to hang waiting for a response. The console never came back, so I had to kill it with CTRL+C.

I also tried to check for existing indices via

curl -XGET 'http://localhost:9200/_cat/indices?v'

Same behavior as above.
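
A way to see where such a request stalls (a minimal sketch using standard curl flags, verbose output plus a short timeout) would be:

curl -v --max-time 5 -XGET 'http://localhost:9200/_cluster/health'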

Murrell answered 8/12, 2014 at 18:40 Comment(6)
Looks like your cluster is dead? Is elasticsearch actually running? – Kindless
Yep, curl -XGET localhost:9200 and curl -XGET localhost:9200/_status work fine. – Murrell
I figured out that after I commented out #network.publish_host: localhost and #network.host: localhost, it works fine. Wtf? – Murrell
Did you change these settings from the defaults? – Kindless
If you read the documentation on these settings (elasticsearch.org/guide/en/elasticsearch/reference/current/…), it follows that you have to specify either a resolvable hostname or an address, and localhost is neither. – Kindless
Try using a visual tool to understand what is going on; you can do so here: elastichq.org/app/index.php – Xylophone

To check Elasticsearch cluster health you can use

curl localhost:9200/_cat/health

More on the cat APIs here.
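
Adding the ?v flag prints column headers, which makes the _cat output easier to read; a minimal sketch (the exact columns vary by Elasticsearch version):

curl 'localhost:9200/_cat/health?v'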

I usually use the elasticsearch-head plugin to visualize that.

You can find its GitHub project here.

It's easy to install with sudo $ES_HOME/bin/plugin -i mobz/elasticsearch-head, and then you can open localhost:9200/_plugin/head/ in your web browser.

You should see something that looks like this:

(screenshot of the elasticsearch-head cluster overview)

Syriac answered 9/12, 2014 at 10:46 Comment(3)
I get Error (52) Empty reply from server when I try to execute the above command. – Marlomarlon
@AzharUddinSheikh you probably have some kind of security check on the cluster. Maybe it's protected with some kind of key or certificate... – Syriac
@AzharUddinSheikh This kind of reply can also happen when you make an HTTP request to an HTTPS endpoint, so try using the https protocol in your address. – Blinding

You can check Elasticsearch cluster health by using curl and the Cluster Health API provided by Elasticsearch:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'

This will give you the status and other related data you need.

{
 "cluster_name" : "xxxxxxxx",
 "status" : "green",
 "timed_out" : false,
 "number_of_nodes" : 2,
 "number_of_data_nodes" : 2,
 "active_primary_shards" : 15,
 "active_shards" : 12,
 "relocating_shards" : 0,
 "initializing_shards" : 0,
 "unassigned_shards" : 0,
 "delayed_unassigned_shards" : 0,
 "number_of_pending_tasks" : 0,
 "number_of_in_flight_fetch" : 0
}
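
If you want to use that status from a script, one approach (a sketch, assuming jq is installed and security is not enabled on the cluster) is to extract the status field and act on it:

# print only the cluster status (green / yellow / red)
status=$(curl -s 'localhost:9200/_cluster/health' | jq -r '.status')
if [ "$status" != "green" ]; then
  echo "cluster is $status"
fi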
Bodoni answered 22/12, 2015 at 7:28 Comment(0)
O
17

The _cluster/health API can provide far more than the typical output that most people see from it:

 $ curl -XGET 'localhost:9200/_cluster/health?pretty'

Most APIs within Elasticsearch can take a variety of arguments to augment their output. This applies to the Cluster Health API as well.

Examples

Health of all indices:
$ curl -XGET 'localhost:9200/_cluster/health?level=indices&pretty' | head -50
{
  "cluster_name" : "rdu-es-01",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 9,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 1106,
  "active_shards" : 2213,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0,
  "indices" : {
    "filebeat-6.5.1-2019.06.10" : {
      "status" : "green",
      "number_of_shards" : 3,
      "number_of_replicas" : 1,
      "active_primary_shards" : 3,
      "active_shards" : 6,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    "filebeat-6.5.1-2019.06.11" : {
      "status" : "green",
      "number_of_shards" : 3,
      "number_of_replicas" : 1,
      "active_primary_shards" : 3,
      "active_shards" : 6,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    "filebeat-6.5.1-2019.06.12" : {
      "status" : "green",
      "number_of_shards" : 3,
      "number_of_replicas" : 1,
      "active_primary_shards" : 3,
      "active_shards" : 6,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    "filebeat-6.5.1-2019.06.13" : {
      "status" : "green",
      "number_of_shards" : 3,
Health of all shards:
$ curl -XGET 'localhost:9200/_cluster/health?level=shards&pretty' | head -50
{
  "cluster_name" : "rdu-es-01",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 9,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 1106,
  "active_shards" : 2213,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0,
  "indices" : {
    "filebeat-6.5.1-2019.06.10" : {
      "status" : "green",
      "number_of_shards" : 3,
      "number_of_replicas" : 1,
      "active_primary_shards" : 3,
      "active_shards" : 6,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "shards" : {
        "0" : {
          "status" : "green",
          "primary_active" : true,
          "active_shards" : 2,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 0
        },
        "1" : {
          "status" : "green",
          "primary_active" : true,
          "active_shards" : 2,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 0
        },
        "2" : {
          "status" : "green",
          "primary_active" : true,
          "active_shards" : 2,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 0

The API also has a variety of wait_* options, with which it waits for various state changes before returning, either as soon as the condition is met or after a specified timeout.
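
For example, a sketch that waits up to 30 seconds for the cluster to reach at least yellow status (wait_for_status and timeout are standard Cluster Health API parameters; adjust the host for your setup):

curl -XGET 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=30s&pretty'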

Offoffbroadway answered 17/6, 2019 at 12:26 Comment(0)

If the Elasticsearch cluster is not accessible directly (e.g. it is behind a firewall) but Kibana is:

Kibana => DevTools => Console:

GET /_cluster/health 
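
Other read-only checks work the same way from the console; a small sketch using standard cat APIs (output columns vary by Elasticsearch version):

GET _cat/health?v
GET _cat/indices?v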


Matte answered 18/6, 2019 at 7:44 Comment(3)
This works for sure, but how does Kibana connect to ES automatically? – Pegeen
@HarishNarayanan, it is not automatic; it is either the default or set via configuration. Kibana simply appends those GET /_cluster/health requests from the UI to the base URL (e.g. http://localhost:9200) it uses to connect to Elasticsearch. The point of using Kibana is that you may not even have network access to Elasticsearch (e.g. it is not exposed from the cluster); you can still run these queries as long as you can access Kibana. – Matte
Got it @uvsmtid. I reviewed the config folder and related yml files and got the required info. Many thanks. – Pegeen

In case you have authentication and SSL/TLS enabled, use the following command on the node where Elasticsearch is running:

curl -XGET --insecure --user elastic:12345678 \
    'https://localhost:9200/_cluster/health?pretty'

Here elastic is the username, 12345678 is the password (it will be different in your case), https://localhost:9200 is the host with the https protocol because SSL/TLS is enabled, and --insecure tells curl to skip TLS certificate verification so you don't have to provide the SSL certificates.
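
If you would rather not skip verification, you can point curl at the cluster's CA certificate instead of using --insecure; a sketch, assuming the certificate is the http_ca.crt file generated under the Elasticsearch config directory (the path may differ in your installation):

curl --cacert $ES_HOME/config/certs/http_ca.crt -u elastic:12345678 \
    'https://localhost:9200/_cluster/health?pretty'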

Skyway answered 9/10, 2023 at 11:32 Comment(0)

PROBLEM:

Sometimes a request to localhost may not reach Elasticsearch directly, and the command returns output like the one below:

# curl -XGET localhost:9200/_cluster/health?pretty

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
<meta http-equiv="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<title>ERROR: The requested URL could not be retrieved</title>
<style type="text/css"><!--BODY{background-color:#ffffff;font-family:verdana,sans-serif}PRE{font-family:sans-serif}--></style>
</head><body>
<h1>ERROR</h1>
<h2>The requested URL could not be retrieved</h2>
<hr>
<p>The following error was encountered while trying to retrieve the URL: <a href="http://localhost:9200/_cluster/health?">http://localhost:9200/_cluster/health?</a></p>
<blockquote>
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>

<p>The system returned: <i>(111) Connection refused</i></p>

<p>The remote host or network may be down.  Please try the request again.</p>
<p>Your cache administrator is <a href="mailto:root?subject=CacheErrorInfo%20-%20ERR_CONNECT_FAIL&amp;body=CacheHost%3A%20squid2%0D%0AErrPage%3A%20ERR_CONNECT_FAIL%0D%0AErr%3A%20(111)%20Connection%20refused%0D%0ATimeStamp%3A%20Mon,%2017%20Dec%202018%2008%3A07%3A36%20GMT%0D%0A%0D%0AClientIP%3A%20192.168.13.14%0D%0AServerIP%3A%20127.0.0.1%0D%0A%0D%0AHTTP%20Request%3A%0D%0AGET%20%2F_cluster%2Fhealth%3Fpretty%20HTTP%2F1.1%0AUser-Agent%3A%20curl%2F7.29.0%0D%0AHost%3A%20localhost%3A9200%0D%0AAccept%3A%20*%2F*%0D%0AProxy-Connection%3A%20Keep-Alive%0D%0A%0D%0A%0D%0A">root</a>.</p>

<br>   
<hr> 
<div id="footer">Generated Mon, 17 Dec 2018 08:07:36 GMT by squid2 (squid/3.0.STABLE25)</div>
</body></html>

# curl -XGET localhost:9200/_cat/indices

This returned the same Squid-generated "The requested URL could not be retrieved / (111) Connection refused" error page as above.

SOLUTION:

This error is most probably returned by a local Squid proxy deployed on the server.

So it worked fine after replacing localhost with the local IP on which Elasticsearch is deployed.
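
If the proxy is picked up from an http_proxy environment variable, another option is to tell curl to bypass it for localhost; a sketch (--noproxy is a standard curl flag, and <local_ip> is a placeholder for the node's address):

curl --noproxy localhost -XGET 'http://localhost:9200/_cluster/health?pretty'
curl -XGET 'http://<local_ip>:9200/_cluster/health?pretty'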

Rento answered 17/12, 2018 at 9:56 Comment(0)
