LeveledCompactionStrategy: what is the impact of tuning sstable_size_in_mb?

To enhance read performance, I am trying to have fewer underlying SSTables with LCS, so I set sstable_size_in_mb to 1280 MB, as suggested by some articles which pointed out that the 160 MB default value was picked by the Cassandra core team a long time ago, on a server that is quite old by now, with only 2 GB of RAM. My concern, however, is about the implications of using such a high value for sstable_size_in_mb.
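
For reference, here is a minimal sketch of how that option is set per table through the LCS compaction options (the keyspace and table names are hypothetical):

```sql
-- Hypothetical table; sstable_size_in_mb is passed as an LCS compaction option.
ALTER TABLE my_keyspace.my_table
WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'sstable_size_in_mb': 1280
};
```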

My understanding is that LCS regularly compacts all the SSTables in L0 together with all the SSTables in L1, replacing the entire content of L1. So each time L1 is rewritten, the CPU/RAM requirements and the write amplification may be higher with a higher value of sstable_size_in_mb: with sstable_size_in_mb = 1280 MB, the roughly 10 tables of 1280 MB in L1 have to be merged with all the L0 tables each time. There may also be implications at higher levels, even if the number of SSTables to replace seems lower there (one L1 SSTable is merged with about 10 L2 SSTables, and those 10 L2 SSTables are then replaced).
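
To put rough numbers on that, assuming the usual LCS fan-out of 10 (so L1 holds about 10 SSTables, L2 about 100, and so on): with the default 160 MB, an L0→L1 compaction rewrites on the order of 10 × 160 MB ≈ 1.6 GB of L1 data plus whatever sits in L0, whereas with sstable_size_in_mb = 1280 the same merge rewrites on the order of 12.8 GB plus L0. Each individual compaction therefore touches much more data and runs longer, even though fewer compactions are needed overall.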

Questions:

  1. Having a higher value of sstable_size_in_mb may improve read performance by lowering the number of SSTables involved in a CQL table. However, what are the other implications of such a high value (like 1280 MB) for sstable_size_in_mb?

  2. With a higher value, are there any related settings to tune (garbage collector, chunk cache, ...) to keep compactions of those larger SSTables performing well and to reduce GC activity?

  3. A more subjective question: what is the typical value of sstable_size_in_mb you use in your deployments?

Savagery answered 15/5, 2019 at 9:40

To answer your first question, I'd like to quote some original text from Jonathan Ellis in CASSANDRA-5727, when the community initially looked into sstable_size_in_mb (and subsequently settled on 160 MB).

"larger files mean that each level contains more data, so reads will have to touch less sstables, but we're also compacting less unchanged data when we merge forward." (Note: I suspect there was a typo and he meant "we're also compacting more unchanged data when we merge forward", which aligns with what you stated in your second paragraph, and what he meant by larger file impacting "compaction efficiency".)

As for any other implication: it might push the envelope on the LCS node density upper bound, as it would allow a much higher density for the same number of SSTables per node.

To answer your second question, compaction does create a lot of churn in the heap, as it creates many short-lived objects from SSTables. Given the much bigger SSTables involved in compaction with the 1280 MB size, you should pay attention to your gc.log and watch out for "Humongous Allocation" messages (if you use G1GC). If they turn out to happen a lot, you can increase the region size to avoid costly collections of humongous objects by using the -XX:G1HeapRegionSize option.
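
As a minimal sketch (the file name and location vary by Cassandra version, typically conf/jvm.options or cassandra-env.sh, and the 32m value below is only illustrative; G1 region sizes must be a power of two no larger than 32 MB):

```
# conf/jvm.options -- illustrative; an object larger than half a region
# is treated as humongous, so a bigger region raises that threshold
-XX:G1HeapRegionSize=32m
```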

For your third question, as far as I know, many have used the 160 MB default value for a long time, as no comprehensive analysis has yet been published on the impact/benefit of benchmarking larger SSTable sizes on modern hardware (I attempted to run some quick tests, but got busy with other things and didn't finish that effort, sorry). However, I do think that if people are interested in achieving higher node density with LCS, this SSTable size is a parameter worth exploring.

Surplus answered 19/11, 2021 at 7:19
