We have a modest ClickHouse cluster, ~30 nodes, and want to collect usage stats on it. We are hoping to do this with scheduled queries against the system tables, but a normal query only returns information for the one node you happen to be connected to, and creating a distributed table only works with the *_log system tables. We could loop over the nodes, but we'd rather not. Is there a way to get all the instances of a system table, such as system.parts, in one query?
Is there a better way to query system tables across a ClickHouse cluster?
Distributed tables work with any type of table, and so does clusterAllReplicas.
CREATE TABLE test ON CLUSTER replicated AS system.processes
ENGINE = Distributed(replicated, system, processes);
SELECT
    FQDN(),
    elapsed
FROM test
┌─FQDN()─────────┬────elapsed─┐
│ hos.mycmdb.net │ 0.00063795 │
└────────────────┴────────────┘
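The same pattern extends to the table the question asks about; a minimal sketch for system.parts, where the wrapper name system_parts_all is made up and the cluster name replicated is carried over from the example above:

CREATE TABLE system_parts_all ON CLUSTER replicated AS system.parts
ENGINE = Distributed(replicated, system, parts);

-- query it read-only; system tables do not accept writes
SELECT
    FQDN() AS node,
    database,
    table,
    sum(bytes_on_disk) AS bytes
FROM system_parts_all
WHERE active
GROUP BY node, database, table;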
SELECT
    FQDN(),
    elapsed
FROM clusterAllReplicas(replicated, system, processes);
SELECT elapsed
FROM clusterAllReplicas(replicated, system, processes)
┌─────elapsed─┐
│ 0.005636027 │
└─────────────┘
┌─────elapsed─┐
│ 0.000228303 │
└─────────────┘
┌─────elapsed─┐
│ 0.000275745 │
└─────────────┘
┌─────elapsed─┐
│ 0.000311621 │
└─────────────┘
┌─────elapsed─┐
│ 0.000270791 │
└─────────────┘
┌─────elapsed─┐
│ 0.000288045 │
└─────────────┘
┌─────elapsed─┐
│ 0.001048277 │
└─────────────┘
┌─────elapsed─┐
│ 0.000256203 │
└─────────────┘
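Applied to the original goal of usage stats, a sketch of an aggregate over system.parts; the cluster name replicated and the column aliases are assumptions carried over from the examples above:

SELECT
    FQDN() AS node,
    database,
    table,
    sum(rows) AS total_rows,
    sum(bytes_on_disk) AS total_bytes
FROM clusterAllReplicas(replicated, system, parts)
WHERE active
GROUP BY node, database, table
ORDER BY total_bytes DESC;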
When we tried this solution, we started getting errors like this one: <Error> default.parts.DirectoryMonitor: Code: 48, e.displayText() = DB::Exception: Received from <redacted>. DB::Exception: Method write is not supported by storage SystemParts. Any thoughts on what we might be doing wrong? –
Implicate
@RichardRymer you cannot create a materialized view which inserts into or reads from system.parts. It seems you can achieve your goal simply by enabling system.part_log –
Suicidal
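For reference, a rough sketch of what the part_log suggestion could look like once <part_log> has been enabled in config.xml on every node; the cluster name replicated is assumed from the answer above:

-- new parts created per node over the last day
SELECT
    FQDN() AS node,
    count() AS new_parts
FROM clusterAllReplicas(replicated, system, part_log)
WHERE event_type = 'NewPart'
  AND event_time > now() - INTERVAL 1 DAY
GROUP BY node;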
You can also use the remote or remoteSecure table functions, which support multiple addresses:
SELECT
    hostName() AS host,
    any(partition),
    count()
FROM remote('node{01..30}-west.contoso.com', system, parts)
GROUP BY host
/*
┌─host────────┬─any(partition)─┬─count()─┐
│ node01-west │         202012 │     733 │
..
│ node30-west │         202012 │     687 │
└─────────────┴────────────────┴─────────┘
*/
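If the nodes only accept TLS connections, remoteSecure takes the same shape; a sketch reusing the same hypothetical hostnames, with the default secure native port 9440 and a made-up monitoring account:

SELECT
    hostName() AS host,
    count() AS part_count
FROM remoteSecure('node{01..30}-west.contoso.com:9440', system, parts, 'monitoring', 'password')
GROUP BY host;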
Interesting. I will have to experiment with this to see if it works on Superset. –
Implicate
Very cool. I did not know about this. Thank you for sharing. –
Implicate
@RichardRymer np, I'm glad to help ;) –
Rebus
So this ended up being the most stable solution for us, though it involves a little more maintenance for scheduled queries. –
Implicate
For the record, we ended up using materialized views:
CREATE MATERIALIZED VIEW _tmp.parts ON CLUSTER main_cluster
ENGINE = Distributed('main_cluster', 'system', 'parts', rand())
AS SELECT * FROM system.parts
Though this worked, we are pretty sure it caused a bunch of performance issues, namely insert latency went through the roof. –
Implicate
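Given the insert-latency problem described above, a lower-impact variation is a plain snapshot table filled by a scheduled INSERT ... SELECT instead of a materialized view; a sketch with hypothetical names (stats.parts_snapshot), reusing main_cluster from the view above:

CREATE DATABASE IF NOT EXISTS stats;

-- snapshot table on the node that runs the scheduled query
CREATE TABLE stats.parts_snapshot
(
    collected_at DateTime,
    node String,
    database_name String,
    table_name String,
    bytes_on_disk UInt64
)
ENGINE = MergeTree
ORDER BY (collected_at, node);

-- run periodically from cron or any external scheduler
INSERT INTO stats.parts_snapshot
SELECT
    now(),
    FQDN() AS node,
    database,
    table,
    sum(bytes_on_disk)
FROM clusterAllReplicas(main_cluster, system, parts)
WHERE active
GROUP BY node, database, table;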