Cassandra Timing out because of TTL expiration

I'm using DataStax Community v2.1.2-1 (AMI v2.5) with the preinstalled default settings, plus the read timeout increased to 10 seconds. Here is the issue.

Table:

create table simplenotification_ttl (
    user_id varchar,
    real_time timestamp,
    insert_time timeuuid,
    read boolean,
    msg varchar,
    PRIMARY KEY (user_id, real_time, insert_time)
);

Insert Query:

insert into simplenotification_ttl (user_id, real_time, insert_time, read)
  values ('test_3', 14401440123, now(), false) using TTL 800;

For the same 'test_3' I inserted 33,000 rows. [This problem does not happen with 24,000 rows.]

Gradually I see:

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 

 count
-------
 15681

(1 rows)

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 

 count
-------
 12737

(1 rows)

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 
errors={}, last_host=127.0.0.1

I have run this experiment many times, even on different tables. Once this happens, even if I insert more rows with the same user_id and do a retrieval with LIMIT 1, it times out.

I need TTL to work properly, i.e. give a count of 0 after the expected time. How do I solve this issue? Thanks.

[The rest of my setup: 2 nodes on m3.large EC2 instances with EC2Snitch.]

Ortega answered 9/12, 2014 at 10:33 Comment(1)
If I deliberately query excluding the deleted region, it works. E.g. select * from simplenotification where user_id = 'test_3' and time < 133 works. But my use case needs a LIMIT query. – Ortega

You're running into a problem where the number of tombstones (markers for deleted or expired values) your query has to scan is passing a threshold, and the query then times out.

You can see this if you turn on tracing and then try your select statement, for example:

cqlsh> tracing on;
cqlsh> select count(*) from test.simple;

 activity                                                                        | timestamp    | source       | source_elapsed
---------------------------------------------------------------------------------+--------------+--------------+----------------
...snip...
 Scanned over 100000 tombstones; query aborted (see tombstone_failure_threshold) | 23:36:59,324 |  172.31.0.85 |         123932
                                                    Scanned 1 rows and matched 1 | 23:36:59,325 |  172.31.0.85 |         124575
                           Timed out; received 0 of 1 responses for range 2 of 4 | 23:37:09,200 | 172.31.13.33 |       10002216

You're kind of running into an anti-pattern for Cassandra, where data is stored for just a short time before being deleted. There are a few options for handling this better, including revisiting your data model if needed; one common bucketing approach is sketched below.
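
As a sketch only (the table name and hourly bucket format here are assumptions for illustration, not something from your schema), you can put a time bucket in the partition key, so reads only touch the current bucket and never have to scan across older, tombstoned data:

-- Hypothetical bucketed variant of your table: one partition per user per hour.
create table simplenotification_by_bucket (
    user_id varchar,
    time_bucket varchar,    -- e.g. '2014-12-09-10' for the hour the row was written
    real_time timestamp,
    insert_time timeuuid,
    read boolean,
    msg varchar,
    PRIMARY KEY ((user_id, time_bucket), real_time, insert_time)
);

-- Reads then target only the bucket(s) you still care about:
select * from simplenotification_by_bucket
  where user_id = 'test_3' and time_bucket = '2014-12-09-10'
  limit 1;

With a TTL of 800 seconds, expired rows end up in buckets you simply stop querying, so their tombstones are never scanned.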

For your sample problem, I tried lowering the gc_grace_seconds setting to 300 (5 minutes). That causes the tombstones to be cleaned up much more frequently than the default 10 days, but that may or may not be appropriate for your application. Read up on the implications of deletes and adjust as needed.
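
If you want to try the same change, the setting is per table and can be altered with a CQL statement like this (using the notificationstore keyspace shown in your cqlsh prompt):

-- Lower how long tombstones are kept before compaction can purge them.
alter table notificationstore.simplenotification_ttl
  with gc_grace_seconds = 300;

Note that tombstones are only physically removed during compaction after gc_grace_seconds has passed, so lowering it just shortens how long they hang around to be scanned.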

Occur answered 9/12, 2014 at 23:57 Comment(0)
