Is a ClickHouse Buffer table appropriate for realtime ingestion of many small inserts?

I am writing an application that plots financial data and interacts with a realtime feed of such data. Due to the nature of the task, live market data may arrive very frequently, one trade at a time. I am using the database locally and I am the only user; only one program (my middleware) will be inserting data into the db. My primary concern is latency: I want to minimize it as much as possible. For that reason, I would like to avoid having a queue (in a sense, I want the Buffer table to fulfill that role). A lot of the analytics ClickHouse calculates for me are expected to be realtime (as much as possible) as well. I have three questions:

  1. Clarify some limitations/caveats from the Buffer Table documentation
  2. Clarify how querying works (regular queries + materialized views)
  3. What happens when I query the db when data is being flushed

Question 1) Clarify some limitations/caveats from the Buffer Table documentation

Based on the ClickHouse documentation, I understand that many small INSERTs are sub-optimal, to say the least. While researching the topic, I found that the Buffer engine [1] could be used as a solution. It made sense to me; however, when I read the Buffer documentation, I found some caveats:

Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”).

A few thousand rows per second is absolutely fine for me; however, I am concerned about other performance considerations: if I do commit data to the buffer table one row at a time, should I expect spikes in CPU/memory? If I understand correctly, committing one row at a time to a MergeTree table would cause a lot of additional work for the merging job, but that should not be a problem if a Buffer table is used, correct?
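For concreteness, the one-row-at-a-time pattern I am describing looks roughly like this (the trades table and its schema are illustrative placeholders, not my actual schema):

    CREATE TABLE trades
    (
        ts     DateTime64(3),
        symbol LowCardinality(String),
        price  Float64,
        qty    Float64
    )
    ENGINE = MergeTree
    ORDER BY (symbol, ts);

    -- one INSERT per trade, issued as each tick arrives from the live feed;
    -- aimed directly at a MergeTree table, this is the pattern the docs warn about
    INSERT INTO trades VALUES (now64(3), 'AAPL', 187.12, 100);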

If the server is restarted abnormally, the data in the buffer is lost.

I understand that this refers to things like a power outage or the computer crashing. If I shut down the computer normally or stop the ClickHouse server normally, can I expect the buffer to flush its data to the target table?

Question 2) Clarify how querying works (regular queries + materialized views)

When reading from a Buffer table, data is processed both from the buffer and from the destination table (if there is one). Note that the Buffer table does not support an index. In other words, data in the buffer is fully scanned, which might be slow for large buffers. (For data in a subordinate table, the index that it supports will be used.)

Does that mean I can use queries against the target table and expect Buffer Table data to be included automatically? Or is it the other way around - I query the buffer table and the target table is included in the background? If either is true (and I don't need to aggregate both tables manually), does that also mean Materialized Views would be populated? Which table should trigger the materialized view - the on-disk table or the buffer table? Or both, in some way?

I rely on Materialized Views a lot and need them updated in realtime (or as close as possible). What would be the best strategy to accomplish that goal?

Question 3) What happens when I query the db when data is being flushed?

My two main concerns here are with regard to:

  1. Running a query at the exact time flushing occurs - is there a risk of duplicated records or omitted records?
  2. At which point are Materialized Views of the target table populated (I suppose it depends on whether it's the target table or the buffer table that triggers the MV)? Is flushing the buffer important in how I structure the MV?

Thank you for your time.

[1] https://clickhouse.tech/docs/en/engines/table-engines/special/buffer/

Asked by Chiefly on 11/9/2021 at 22:02

A few thousand rows per second is absolutely fine for me; however, I am concerned about other performance considerations: if I do commit data to the buffer table one row at a time, should I expect spikes in CPU/memory?

No, the Buffer table engine does not produce CPU/memory spikes.

If I understand correctly, committing one row at a time to a MergeTree table would cause a lot of additional work for the merging job, but that should not be a problem if a Buffer table is used, correct?

The Buffer table engine works as an in-memory buffer that periodically flushes batches of rows to the underlying *MergeTree table. Its parameters control the size and frequency of those flushes: data is flushed once all of the min thresholds are reached, or as soon as any single max threshold is exceeded.
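As a sketch, a Buffer table over the hypothetical trades table from the question might look like this (the parameter values are illustrative, not a recommendation):

    -- Buffer(database, table, num_layers,
    --        min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)
    CREATE TABLE trades_buffer AS trades
    ENGINE = Buffer(
        currentDatabase(), trades,
        1,                 -- num_layers: one buffer is enough for a single writer
        1, 5,              -- min_time / max_time, in seconds
        1000, 100000,      -- min_rows / max_rows
        100000, 10000000   -- min_bytes / max_bytes
    );

The middleware would then INSERT into trades_buffer instead of trades.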

If I shut down the computer normally or stop the ClickHouse server normally, can I expect the buffer to flush its data to the target table?

Yes, when the server stops normally, Buffer tables flush their data.

I query the buffer table and the target table is included in the background?

Yes, that is the right behavior: when you SELECT from the Buffer table, the SELECT is also passed to the underlying *MergeTree table, and data that has already been flushed is read from the *MergeTree table.
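Continuing the hypothetical example, realtime queries would therefore target the Buffer table; querying trades directly would miss rows that have not been flushed yet:

    -- reads rows still in memory plus rows already flushed to trades
    SELECT symbol, count() AS ticks
    FROM trades_buffer
    GROUP BY symbol;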

does that also mean Materialized Views would be populated?

It is not clear whether you CREATE MATERIALIZED VIEW triggered FROM the *MergeTree table or FROM the Buffer table, and which table engine you use in the TO clause.

I would suggest creating the MATERIALIZED VIEW triggered FROM the underlying *MergeTree table.
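A sketch of that layout with the same hypothetical names (the per-minute volume aggregation is an arbitrary example, not something from the question):

    -- aggregation target table
    CREATE TABLE trades_1m
    (
        ts_min DateTime,
        symbol LowCardinality(String),
        volume AggregateFunction(sum, Float64)
    )
    ENGINE = AggregatingMergeTree
    ORDER BY (symbol, ts_min);

    -- triggers on each insert into trades, i.e. on each buffer flush
    CREATE MATERIALIZED VIEW trades_1m_mv TO trades_1m AS
    SELECT
        toStartOfMinute(ts) AS ts_min,
        symbol,
        sumState(qty)       AS volume
    FROM trades
    GROUP BY ts_min, symbol;

Reads from trades_1m would use sumMerge(volume). Note that with this layout the view is only as fresh as the last buffer flush: rows still sitting in trades_buffer have not reached trades, and therefore have not reached the view, yet.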

Answered by Dato on 25/9/2021 at 15:30
