Elasticsearch uses just half of the CPU cores even under full load

I have installed Elasticsearch on my server (Windows Server 2012). When I run several query requests as a stress test, only half of the CPU cores are utilized. Why?

[Screenshot: CPU core utilization while Elasticsearch is under the stress test]

Burgess asked 21/1, 2015 at 10:52. Comments (6):
Is this a virtual machine? – Nahuatlan
Yes, it is. What is wrong with virtual machines? – Burgess
When you say it is using half of the CPU cores, do you mean half of the cores of the VM or of the host machine? – Nahuatlan
Give more details about the test you are running. – Evers
I have not changed any configuration in the YAML config file except setting the cluster name and the number of shards. – Burgess
Use the cat thread_pool API to see how many threads are running and whether there is any queuing or rejection. – Saltwater
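
Following up on the last comment: a minimal sketch, assuming Python 3 and a node reachable at localhost:9200, of polling the cat thread_pool API to look for queued or rejected requests:

# Minimal sketch: inspect Elasticsearch thread pool activity via the _cat API.
# Assumes the node is reachable at http://localhost:9200 (adjust as needed).
import urllib.request

# "v" adds a header row so the active/queue/rejected columns are labelled.
url = "http://localhost:9200/_cat/thread_pool?v"

with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))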

Are you using the default Elasticsearch configuration?

Make sure you don't limit the number of threads used for search/bulk/index. The defaults are well optimised, so there is no need to change them. The default number of threads (except for search) is equal to the number of cores on your machine; for search it is (number of cores * 3). A quick way to verify the effective sizes is shown further below.

An example of this configuration (which you should avoid) for search in the elasticsearch.yml file:

threadpool.search.type: fixed
threadpool.search.size: <num-of-threads>

Also, make sure to follow the deployment guidelines to optimise performance.
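
As a quick sanity check that the defaults are in effect, here is a minimal sketch (assuming Python 3 and a node at localhost:9200; the exact response fields can vary between Elasticsearch versions) of reading each node's search thread pool settings from the nodes info API:

# Minimal sketch: print each node's configured search thread pool.
# Assumes Elasticsearch is reachable at http://localhost:9200; field names may differ by version.
import json
import urllib.request

url = "http://localhost:9200/_nodes/thread_pool"

with urllib.request.urlopen(url) as response:
    info = json.load(response)

for node_id, node in info.get("nodes", {}).items():
    search_pool = node.get("thread_pool", {}).get("search", {})
    print(node.get("name", node_id), "search pool:", search_pool)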

Umlaut answered 25/2, 2015 at 8:01. Comments (6):
Thanks for your contributions Rafi, but there are no extra configs in my elasticsearch.yml file. The only notable thing I can add is that the Elasticsearch application is hosted on a virtual Windows Server 2012 R2 machine. – Burgess
@HosseinBakhtiari Maybe it is too late, but how many shards do you have? In my experience a search within a single shard uses just one core, so if your shard count is half the number of cores, you will see only 50% utilization. – Marlanamarlane
@Marlanamarlane Is there a way to force it to use all cores? I have only 1 shard, which is completely fine for a medium e-shop (~30k products, <1 GB of data). – Large
@Large Are your searches too slow? Do you really need to use more cores? Do you have concurrent searches going? If so, think about using the replication factor more aggressively: start ES multiple times on the same node and treat it like a cluster. ES will round-robin searches and you will get better use of your resources. – Marlanamarlane
@Marlanamarlane That's not the case. I don't have many QPS; I only have slow queries taking several seconds because of many aggregations. Splitting into multiple shards/nodes has no effect, I'm afraid, since every shard must compute everything and no work gets divided (as far as I know and as my tests show). For now I'm just going to relax the query and drop the aggregations altogether, which unfortunately has a big impact on the results for me. – Large
@ulkas I wonder what would happen if you had more than one shard. My perf testing showed that for filtered queries more shards are actually better, and I imagine aggregations would behave the same. It would be interesting if you could reindex to 8 shards (the number of cores) and check the performance. – Marlanamarlane
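
Following the last comments, a minimal sketch of creating a new index with 8 primary shards so a single search can fan out across more cores. The index name (products_v2), shard counts, and host are placeholders, and on an Elasticsearch version this old the documents would still have to be copied over separately (e.g. scan/scroll plus bulk), which is not shown:

# Minimal sketch: create an index with 8 primary shards (one per core, as suggested above).
# The index name "products_v2" and the host localhost:9200 are placeholders.
import json
import urllib.request

settings = {
    "settings": {
        "number_of_shards": 8,
        "number_of_replicas": 1,
    }
}

request = urllib.request.Request(
    "http://localhost:9200/products_v2",
    data=json.dumps(settings).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))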
