How do atomic batches work in Cassandra?

How can atomic batches guarantee that either all statements in a single batch will be executed or none?

Shelled answered 5/2, 2015 at 16:10 Comment(0)

In order to understand how batches work under the hood, it's helpful to look at the individual stages of the batch execution.

The client

Batches are supported using CQL3 or modern Cassandra client APIs. In each case you'll be able to specify a list of statements you want to execute as part of the batch, a consistency level to be used for all statements and an optional timestamp. You'll be able to batch execute INSERT, DELETE and UPDATE statements. If you choose not to provide a timestamp, the current time is automatically used and associated with the batch.
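As a minimal sketch (assuming the DataStax Java driver, 2.x/3.x style API, and made-up keyspace, table and column names), a logged batch could be built and executed like this:

    // Minimal sketch, assuming a DataStax Java driver 2.x/3.x dependency and a node
    // reachable on 127.0.0.1; keyspace "ks" and table "users" are made up.
    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class LoggedBatchExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {

                BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
                batch.add(new SimpleStatement("INSERT INTO ks.users (id, name) VALUES (1, 'alice')"));
                batch.add(new SimpleStatement("UPDATE ks.users SET name = 'bob' WHERE id = 2"));
                batch.add(new SimpleStatement("DELETE FROM ks.users WHERE id = 3"));

                // One consistency level is used for every statement in the batch.
                batch.setConsistencyLevel(ConsistencyLevel.QUORUM);

                // Optional client-side timestamp in microseconds (needs native protocol v3 /
                // Cassandra 2.1+); if omitted, a single timestamp is assigned automatically
                // and shared by all statements in the batch.
                batch.setDefaultTimestamp(System.currentTimeMillis() * 1000);

                session.execute(batch);
            }
        }
    }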

The client will have to handle two exceptions in case the batch could not be executed successfully.

  • UnavailableException - there are not enough nodes alive to fulfill any of the updates with the specified batch CL
  • WriteTimeoutException - timeout while either writing the batchlog or applying any of the updates within the batch. This can be checked by reading the writeType value of the exception (either BATCH_LOG or BATCH).

Failed writes during the batchlog stage will be retried once automatically by the DefaultRetryPolicy in the Java driver. Batchlog creation is critical to ensure that a batch will always be completed in case the coordinator fails mid-operation. Read on to find out why.
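A hedged sketch of how a client could handle those two exceptions and distinguish the write types (same driver assumptions and names as in the sketch above):

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.WriteType;
    import com.datastax.driver.core.exceptions.UnavailableException;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    class BatchErrorHandling {
        // Sketch only: how a client could react to the two failure modes described above.
        static void execute(Session session, BatchStatement batch) {
            try {
                session.execute(batch);
            } catch (UnavailableException e) {
                // Not enough replicas alive for the batch's consistency level.
            } catch (WriteTimeoutException e) {
                if (e.getWriteType() == WriteType.BATCH_LOG) {
                    // Timed out before the batchlog was durably written: the batch may never
                    // be applied, so it is reasonable to retry it.
                } else if (e.getWriteType() == WriteType.BATCH) {
                    // The batchlog was written: the cluster will eventually apply the whole
                    // batch via batchlog replay, even though this request timed out.
                }
            }
        }
    }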

The coordinator

All batches sent by the client will be executed by the coordinator, just as with any other write operation. What's different from normal write operations is that Cassandra will also make use of a dedicated log that contains all pending batches currently being executed (called the batchlog). This log is stored in the local system keyspace and is managed by each node individually. Each batch execution starts by creating a log entry with the complete batch on (preferably) two nodes other than the coordinator. Once the coordinator has been able to create the batchlog on the other nodes, it will start to execute the actual statements in the batch.

Each statement in the batch will be written to the replicas using the CL and timestamp of the whole batch. Apart from that, there's nothing special about the writes happening at this point. Writes may also be hinted or throw a WriteTimeoutException, which can be handled by the client (see above).

After the batch has been executed, all created batchlog entries can be safely removed. Therefore, upon successful execution, the coordinator will send a batchlog delete message to the nodes that received the batchlog before. This happens in the background and will go unnoticed in case it fails.

Let's wrap up what the coordinator does during batch execution (a rough illustrative sketch follows this list):

  • sends the batchlog to two other nodes (preferably in different racks)
  • executes all statements in the batch
  • deletes the batchlog from those nodes again after successful batch execution
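Purely as an illustration of those three steps, here is a rough sketch; this is not Cassandra's actual implementation, and all types and helper names are hypothetical:

    // Rough, simplified sketch of the coordinator's side of a logged batch. This is NOT
    // Cassandra's real code; the types below are hypothetical and only mirror the
    // three steps listed above.
    import java.util.List;
    import java.util.UUID;

    class CoordinatorSketch {
        interface BatchlogEndpoint {
            void store(UUID id, byte[] serializedBatch);
            void remove(UUID id);
        }
        interface Mutation {
            void apply(String consistencyLevel, long timestampMicros);
        }

        void executeLoggedBatch(byte[] serializedBatch, List<Mutation> mutations,
                                List<BatchlogEndpoint> batchlogNodes, // preferably 2 nodes, distinct racks
                                String consistencyLevel, long timestampMicros) {
            UUID id = UUID.randomUUID();

            // 1. Write the complete batch to the batchlog nodes before touching any data.
            //    A timeout here surfaces to the client as writeType BATCH_LOG.
            for (BatchlogEndpoint node : batchlogNodes) {
                node.store(id, serializedBatch);
            }

            // 2. Apply every statement with the batch's consistency level and timestamp.
            //    A timeout here surfaces as writeType BATCH; the batchlog still exists,
            //    so the batch will eventually be replayed.
            for (Mutation m : mutations) {
                m.apply(consistencyLevel, timestampMicros);
            }

            // 3. Best-effort cleanup: tell the batchlog nodes to delete their copy.
            //    If this message is lost, they will simply replay the idempotent batch later.
            for (BatchlogEndpoint node : batchlogNodes) {
                node.remove(id);
            }
        }
    }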

The batchlog replica nodes

As described above, the batchlog will be replicated to two other nodes (if the cluster size allows it) before batch execution. The idea is that any of these nodes will be able to pick up pending batches in case the coordinator goes down before finishing all statements in the batch.

What makes things a bit complicated is the fact that those nodes won't notice that the coordinator is not alive anymore. The only point at which the batchlog nodes will be updated with the current status of the batch execution is when the coordinator issues a delete message indicating the batch has been successfully executed. In case such a message doesn't arrive, the batchlog nodes will assume the batch hasn't been executed for some reason and will replay the batch from the log.

Batchlog replay potentially takes place every minute, i.e. that is the interval at which a node will check whether there are any pending batches in its local batchlog that haven't been deleted by the (possibly dead) coordinator. To give the coordinator some time between batchlog creation and the actual execution, a fixed grace period is used (write_request_timeout_in_ms * 2, 4 seconds by default). If the batchlog entry still exists after those 4 seconds, it will be replayed.
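To make the timing concrete, here is a tiny sketch of that replay decision (hypothetical code, not Cassandra's implementation):

    // Sketch of the replay decision made by a batchlog node: a pending entry is only
    // replayed once it is older than twice write_request_timeout_in_ms, giving a healthy
    // coordinator time to finish the batch and send the delete first.
    class BatchlogReplaySketch {
        static final long WRITE_REQUEST_TIMEOUT_MS = 2000;               // cassandra.yaml default
        static final long GRACE_PERIOD_MS = 2 * WRITE_REQUEST_TIMEOUT_MS; // 4 seconds

        // Called periodically (roughly every minute) for every local, undeleted batchlog entry.
        static boolean shouldReplay(long batchWrittenAtMillis, long nowMillis) {
            return nowMillis - batchWrittenAtMillis > GRACE_PERIOD_MS;
        }
    }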

Just as with any write operation in Cassandra, timeouts may occur. In this case the node will fall back to writing hints for the timed-out operations. When the timed-out replicas come back up, the writes can be resumed from those hints. This behavior doesn't seem to be affected by whether hinted_handoff_enabled is enabled or not. There's also a TTL associated with the hint, which will cause the hint to be discarded after a longer period of time (the smallest GCGraceSeconds for any involved CF).

Now you might be wondering if it isn't potentially dangerous to replay a batch on two nodes at the same time, which may happen as we replicate the batchlog to two nodes. What's important to keep in mind here is that each batch execution will be idempotent, due to the limited kinds of supported operations (inserts, updates and deletes) and the fixed timestamp associated with the batch. There won't be any conflicts even if both nodes and the coordinator retry executing the batch at the same time.
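To make the idempotence argument concrete, here is a minimal, deliberately simplified illustration of timestamp-based (last-write-wins) reconciliation; this is not Cassandra's internal code:

    // Re-applying the exact same value with the exact same batch timestamp always yields
    // the same stored cell, no matter how many nodes replay the batch or how often.
    class ReconcileSketch {
        static final class Cell {
            final String value;
            final long timestampMicros;
            Cell(String value, long timestampMicros) {
                this.value = value;
                this.timestampMicros = timestampMicros;
            }
        }

        // Simplified last-write-wins reconciliation of two versions of the same cell.
        static Cell reconcile(Cell existing, Cell incoming) {
            return incoming.timestampMicros >= existing.timestampMicros ? incoming : existing;
        }
    }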

Atomicity guarantees

Let's get back to the atomicity aspect of "atomic batches" and review what exactly is meant by atomic (source):

"(Note that we mean “atomic” in the database sense that if any part of the batch succeeds, all of it will. No other guarantees are implied; in particular, there is no isolation; other clients will be able to read the first updated rows from the batch, while others are in progress."

So in a sense we get "all or nothing" guarantees. In most cases the coordinator will just write all the statements in the batch to the cluster. However, in case of a write timeout, we must check at which point the timeout occurred by reading the writeType value: the batch must have been written to the batchlog in order to be sure that those guarantees still apply. Also, at this point other clients may read partially executed results from the batch.

Getting back to the question, how can Cassandra guarantee that either all or none of the statements in a batch will be executed? Atomic batches basically depend on successful replication and idempotent statements. It's not a 100% guaranteed solution, as in theory there might be scenarios that will still cause inconsistencies. But for a lot of use cases in Cassandra it's a very useful tool if you're aware of how it works.

Shelled answered 5/2, 2015 at 16:10 Comment(7)
Can I configure the number of replicas of batchlogs that should be written by the coordinator? Or is it hard-coded to two, as mentioned in your answer? – Areaway
Is it a good idea to use a logged batch if I have 100 inserts with the same partition key? I want to undo the entire process even if a single insert fails. – Interlinear
@JayeshJain All inserts for a single partition key will be automatically merged into a single statement and executed atomically, so you should be fine. – Shelled
Thx. I verified it. Atomicity was achieved. – Interlinear
@StefanPodkowinski, do you know which node receives the batch? I think it's based on the first statement in the batch, but I could be wrong. Thanks! – Pokeberry
Great answer, but it should highlight that it only refers to multi-partition batches. Single-partition batches are a completely different story, which is what I'm trying to point out in this post: inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch – Thibodeaux
Let's say a client initiates a batch request, then the coordinator sends batchlogs to a few peers, then the coordinator fails. The batch will still be carried out by the peers, as you say. Then, how does the client receive confirmation that the batch succeeded or failed? – Succursal

Batch documentation (doc):

In Cassandra 1.2 and later, batches are atomic by default. In the context of a Cassandra batch operation, atomic means that if any of the batch succeeds, all of it will. To achieve atomicity, Cassandra first writes the serialized batch to the batchlog system table that consumes the serialized batch as blob data. When the rows in the batch have been successfully written and persisted (or hinted) the batchlog data is removed. There is a performance penalty for atomicity. If you do not want to incur this penalty, prevent Cassandra from writing to the batchlog system by using the UNLOGGED option: BEGIN UNLOGGED BATCH
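For comparison, a minimal sketch of the UNLOGGED variant through the DataStax Java driver (2.x/3.x API); an open Session named session is assumed and the table name is made up:

    // No batchlog is written for an UNLOGGED batch: cheaper, but no atomicity guarantee
    // across partitions. 'session' is assumed to be an open com.datastax.driver.core.Session.
    BatchStatement unlogged = new BatchStatement(BatchStatement.Type.UNLOGGED);
    unlogged.add(new SimpleStatement("INSERT INTO ks.events (id, payload) VALUES (1, 'a')"));
    unlogged.add(new SimpleStatement("INSERT INTO ks.events (id, payload) VALUES (2, 'b')"));
    session.execute(unlogged);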

Extravert answered 5/2, 2015 at 17:14 Comment(0)

Cassandra batches:

http://docs.datastax.com/en/cql/3.1/cql/cql_reference/batch_r.html

To add to the above answers: with Cassandra 2.0, you can combine batch statements with lightweight transactions (LWT). The restriction, though, is that all DML statements must be on the same partition.
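A hedged sketch of what such a conditional (LWT) batch could look like with the DataStax Java driver (2.x/3.x API); the table, columns and values are made up, and an open Session named session is assumed:

    // All conditional statements must target the same partition (partition key 'owner' here).
    BatchStatement casBatch = new BatchStatement(BatchStatement.Type.LOGGED);
    casBatch.add(new SimpleStatement(
            "INSERT INTO ks.accounts (owner, id, balance) VALUES ('alice', 1, 100) IF NOT EXISTS"));
    casBatch.add(new SimpleStatement(
            "UPDATE ks.accounts SET balance = 0 WHERE owner = 'alice' AND id = 2 IF balance = 100"));
    ResultSet rs = session.execute(casBatch);
    boolean applied = rs.wasApplied(); // false means none of the conditional statements were applied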

Microbiology answered 31/8, 2015 at 8:11 Comment(1)
You might already know that StackOverflow netiquette encourages users to post answers with an explanation of the key ideas (as opposed to just a URL-only link). You might want to update your post so that it really contains the answer, leaving the URL as a further reference to the source your answer drew inspiration from. Anyway, enjoy becoming an active contributing member of this great merit-focused community. – Gainey
