ClickHouse Memory Issue

I have a table with around 2 billion rows from which I am trying to query max(id). id is not the sorting key of the table, and the table uses the MergeTree table engine.

No matter what I try, I get memory errors, and not just with this one query: as soon as I try to scan any table fully (a whole column), my 12 GB of RAM is not enough. I know I can just add more, but that is not the point. Is it by design that ClickHouse simply throws an error when it doesn't have enough memory? Is there a setting that tells ClickHouse to use disk instead?

SQL Error [241]: ClickHouse exception, code: 241, host: XXXXXX, port: 8123; Code: 241, e.displayText() = DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9440624 bytes), maximum: 9.31 GiB (version 21.4.6.55 (official build))
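
To make this concrete, here is a minimal sketch of the kind of setup I mean (table and column names are placeholders, not my real schema):

    -- Hypothetical MergeTree table where id is not part of the sorting key:
    CREATE TABLE events (ts DateTime, id UInt64) ENGINE = MergeTree ORDER BY ts;

    -- The full-column aggregation that hits the memory limit above:
    SELECT max(id) FROM events;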

Tourane answered 30/7, 2021 at 13:58 Comment(1)
Could you provide the schema of the table and the SQL query? It looks like you use GROUP BY, so look at "GROUP BY in External Memory". – Telephotography

For me, what worked was changing the maximum server memory usage ratio from 0.9 to 1.2:

In config.xml:

    <max_server_memory_usage_to_ram_ratio>1.2</max_server_memory_usage_to_ram_ratio>

Thanks for the reply, as it ultimately led me to this.
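
As a side note, the ratio above changes the server-wide cap; the per-query cap can also be adjusted for a session. This is just a sketch with an illustrative value, not part of the fix I applied:

    -- Raise the per-query memory limit for the current session (value illustrative):
    SET max_memory_usage = 12000000000;  -- in bytes (~12 GB); 0 removes the per-query limit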

Tourane answered 10/8, 2021 at 16:15 Comment(0)

Alexey Milovidov disagrees with putting minimum RAM requirements into the CH documentation, but I would say that 32 GB is the minimum for production CH.

At least (a session-level sketch of these settings follows the list):

  • You need to lower the mark cache, because it's 5 GB(!) by default (set it to 500 MB).
  • You need to lower max_block_size to 16384.
  • You need to lower max_threads to 2.
  • You need to set max_bytes_before_external_group_by to 3 GB.
  • You need to set aggregation_memory_efficient_merge_threads to 1.
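
A sketch of how these might be applied: the four query-level settings can be set per session, while the mark cache is a server-level setting that belongs in config.xml (values are the ones recommended above):

    -- Query-level settings, set per session:
    SET max_block_size = 16384;
    SET max_threads = 2;
    SET max_bytes_before_external_group_by = 3000000000;  -- ~3 GB: spill aggregation state to disk beyond this
    SET aggregation_memory_efficient_merge_threads = 1;

    -- The mark cache cannot be set per query; lower it in config.xml:
    --   <mark_cache_size>536870912</mark_cache_size>  (~500 MB; the default is 5 GB)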
Searching answered 31/7, 2021 at 14:32 Comment(6)
Would you elaborate on each point, and why? Thanks! – Broadtail
So without 32 GB, CH is not able to properly manage its RAM consumption? Under that assumption, would upgrading to 32 GB of RAM resolve all my issues? – Tourane
I would say it's a waste of time struggling with CH on less than 32 GB of RAM. – Searching
@DennyCrane SQL Error [241]: ClickHouse exception, code: 241, host: XXXXXX, port: 8123; Code: 241, e.displayText() = DB::Exception: Memory limit (total) exceeded: would use 55.84 GiB (attempt to allocate chunk of 4231632 bytes), maximum: 55.83 GiB: (while reading column _version): (while reading from part /var/lib/clickhouse/store/21e/21ede8eb-ba4e-4f11-814c-a216f452cd57/0_919_3015_9/ from mark 0 with max_rows_to_read = 16384): While executing MergeTree (version 21.4.6.55 (official build)) I have added 4x the RAM, and the table size is now only double the size of the RAM. – Tourane
@ThomasMiller You will be shocked, but this is kind of expected behavior for CH in your use case. Let's continue on GitHub. – Searching
"You need to lower mark cache because it's 5GB!!!! by default (set it 500MB)." => this solve my problemPeracid