Mutation of 17076203 bytes is too large for the maxiumum size of 16777216

I have "commitlog_segment_size_in_mb: 32" in the cassandra settings but the error below indicates maximum size is 16777216, which is about 16mb. Am I looking at the correct setting for fixing the error below?

I am referring to this setting based on the suggestion provided at http://mail-archives.apache.org/mod_mbox/cassandra-user/201406.mbox/%[email protected]%3E

I am using Cassandra 2.1.0-2.

I am using KairosDB, and its write buffer max size is 0.5 Mb.

WARN  [SharedPool-Worker-1] 2014-10-22 17:31:03,163 AbstractTracingAwareExecutorService.java:167 - Uncaught exception on thread Thread[SharedPool-Worker-1,5,main]: {}
java.lang.IllegalArgumentException: Mutation of 17076203 bytes is too large for the maxiumum size of 16777216
        at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:216) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:203) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:371) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.MutationVerbHandler.doVerb(MutationVerbHandler.java:54) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_67]
        at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) [apache-cassandra-2.1.0.jar:2.1.0]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
Treacle answered 22/10, 2014 at 17:45 Comment(1)
You may want to consider breaking up your writes into smaller chunks if you are looking for super fast latencies. – Meilen

You are looking at the correct parameter in your .yaml. The maximum write size C* will allow is half of the configured commit_log_segment_size_in_mb; the default segment size is 32 MB, so the default maximum write size is 16 MB.
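
As a quick sanity check against the error in the question (the 17,076,203-byte figure comes from the stack trace; the rest is arithmetic):

    # cassandra.yaml value from the question
    commitlog_segment_size_in_mb: 32
    # maximum allowed mutation = 32 MB / 2 = 16 MB = 16 * 1024 * 1024 = 16,777,216 bytes
    # rejected mutation        = 17,076,203 bytes (about 16.3 MB), just over that limit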

Background

commit_log_segment_size_in_mb also represents your block size for commit log archiving or point-in-time backup. Those features are only active if you have configured archive_command or restore_command in your commitlog_archiving.properties file.
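
For reference, those hooks live in conf/commitlog_archiving.properties. A minimal sketch, with illustrative commands and backup paths that are not taken from the question:

    # conf/commitlog_archiving.properties (illustrative values only)
    # %path is the fully qualified path of the segment to archive, %name is its file name
    archive_command=/bin/cp %path /backup/commitlog/%name
    # %from is the archived segment, %to is where it should be restored to
    restore_command=/bin/cp -f %from %to
    restore_directories=/backup/commitlog
    # leave empty to restore everything; format is yyyy:MM:dd HH:mm:ss
    restore_point_in_time=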

Meilen answered 22/10, 2014 at 18:32 Comment(5)
I am using KairosDB, which writes to Cassandra. As per the KairosDB properties, the max write buffer size is 0.5 Mb. – Treacle
So you're saying your writes aren't actually hitting the 16 MB limit? – Meilen
I think so. commitlog_sync_period_in_ms is 10 s, so could it be that the batch size is exceeding 16 MB? – Treacle
Please ignore my earlier comment. commitlog_sync is periodic, not batch. – Treacle
In 2.2.4 the name is a bit different: "commitlog_segment_size_in_mb". – Switchback

It's the correct setting. The error means Cassandra discarded this write because it exceeds 50% of the configured commit log segment size. Set commitlog_segment_size_in_mb: 64 in the cassandra.yaml of each node in the cluster and restart each node for the change to take effect.
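
A minimal sketch of that change, assuming the 64 MB value suggested in this answer (a rolling restart, one node at a time, keeps the cluster available while the new value takes effect):

    # cassandra.yaml -- apply the same change on every node in the cluster
    commitlog_segment_size_in_mb: 64    # maximum mutation size becomes 64 / 2 = 32 MB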

Cause: by design, the maximum allowed mutation size is 50% of the configured commitlog_segment_size_in_mb. This is so Cassandra avoids writing segments with large amounts of empty space.

To elaborate: with a 64 MB segment, up to two 32 MB mutations fit exactly, whereas a 40 MB mutation would fit only once, leaving a larger amount of unused space.

Reference link from DataStax:

https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y-

Agro answered 24/6, 2016 at 7:17 Comment(0)
