Cassandra CQLSH OperationTimedOut error=Client request timeout. See Session.execute[_async](timeout)

I want to transfer data from one Cassandra cluster (reached via 192.168.0.200) to another Cassandra cluster (reached via 127.0.0.1). The data is 523 rows, but each row is about 1 MB. I am using the COPY TO and COPY FROM commands. I get the following error when I issue the COPY TO command:

Error for (8948428671687021382, 9075041744804640605):
OperationTimedOut - errors={
'192.168.0.200': 'Client request timeout. See Session.execute[_async](timeout)'},
last_host=192.168.0.200 (will try again later attempt 1 of 5).
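For reference, the commands I'm running look roughly like this (keyspace and table names here are placeholders):

COPY my_keyspace.my_table TO 'my_table.csv';   -- on the source cluster
COPY my_keyspace.my_table FROM 'my_table.csv'; -- on the destination cluster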

I tried to change the ~/.cassandra/cqlshrc file to:

[connection]
client_timeout = 5000

But this hasn't helped.

Patronage answered 10/10, 2016 at 10:11 Comment(0)

It's not clear which version of Cassandra you're using here, so I'm going to assume 3.0.x.

The COPY command is good but not always the best choice (e.g. if you have a lot of data); for this case, though, you might want to check some of your timeout settings in Cassandra.

The docs here show a PAGETIMEOUT setting too, which may help you.
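A minimal example (keyspace, table, and file name are placeholders; PAGETIMEOUT is the per-page fetch timeout in seconds, PAGESIZE is the number of rows fetched per page):

COPY my_keyspace.my_table TO 'my_table.csv' WITH PAGESIZE=100 AND PAGETIMEOUT=60;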

Moving data between two clusters can be done in a number of other ways. You could use any of the following:

  1. The sstableloader (see the example below)
  2. One of the drivers, like the Java driver
  3. Using Spark to copy data from one cluster to another, like in this example
  4. Using OpsCenter to clone a cluster
  5. The Cassandra bulk loader (I've known a number of people to use this)

Of course, #3 and #4 need DSE Cassandra, but it's just to give you an idea; I wasn't sure if you were using Apache Cassandra or DataStax Enterprise Cassandra.
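For #1, sstableloader streams a table's on-disk SSTables into another cluster. A minimal sketch, assuming the data sits under the default data directory (the path and the -d contact point are placeholders; the real directory name also carries a table ID suffix):

sstableloader -d 127.0.0.1 /var/lib/cassandra/data/my_keyspace/my_table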

Anyway, hope this helps!

Beeswax answered 14/6, 2017 at 9:45 Comment(0)

You may want to increase the request timeout (default: 10 seconds), not the connect timeout. In newer cqlsh versions, request_timeout replaces the client_timeout setting from older releases.

Try:

cqlsh --request-timeout=6000

or add:

[connection]
request_timeout = 6000

to your ~/.cassandra/cqlshrc file.

Massy answered 16/11, 2017 at 22:25 Comment(0)

Regarding the COPY timeout, the correct way is to use the PAGETIMEOUT parameter, as already pointed out:

COPY keyspace.table TO '/dev/null' WITH PAGETIMEOUT=10000;

Setting --request-timeout=6000 with cqlsh does not help in that situation.

Usually answered 5/6, 2018 at 9:20 Comment(0)

Besides the above, check the following:

1. Check for tombstones
In Cassandra, tombstones degrade read performance and can produce exactly this error:
OperationTimedOut - errors={'127.0.0.1': 'Client request timeout. See Session.execute_async'}, last_host=127.0.0.1
Note: inserting data with null values in columns creates tombstones, so avoid null inserts into the table. There are options to help with this, such as unset (https://docs.datastax.com/en/latest-csharp-driver-api/html/T_Cassandra_Unset.htm) and the ignoreNulls property in the Spark Cassandra connector (https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md).
You can check your table's tombstone statistics using the following command:

nodetool tablestats keyspace1.tablename
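If you write with the Python driver, its equivalent of unset is UNSET_VALUE. A minimal sketch, assuming a table keyspace1.tablename with columns id, col_a, and col_b (all names here are placeholders):

from cassandra.cluster import Cluster
from cassandra.query import UNSET_VALUE  # sentinel: skip the column instead of writing null

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('keyspace1')

# Prepared statements use ? placeholders in the Python driver.
prepared = session.prepare("INSERT INTO tablename (id, col_a, col_b) VALUES (?, ?, ?)")

# UNSET_VALUE leaves col_b untouched, so no tombstone is written;
# binding None instead would write a null and create a tombstone.
session.execute(prepared, (1, 'some value', UNSET_VALUE))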

2. Remove tombstones
If you're working on a single node, you can make tombstones eligible for removal by altering the table:

ALTER TABLE keyspace1.tablename WITH gc_grace_seconds = 0;

3. read_request_timeout_in_ms
Increase this value in cassandra.yaml to raise the timeout for read requests.
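For example, to double the stock default of 5000 ms (the node needs a restart for the change to take effect):

read_request_timeout_in_ms: 10000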

Homeostasis answered 18/6, 2018 at 7:22 Comment(1)
Thank you!! Tombstones were indeed a problem for us! We sorted that out by setting gc_grace_seconds to something low for certain tables in which rows were getting deleted often / cells being set to null. - Patronage

In case someone needs to increase the default timeout in the Python Cassandra client:

from cassandra.cluster import Cluster

cas_cluster = Cluster(..)  # contact points etc. omitted
cas_session = cas_cluster.connect(cas_keyspace)

cas_session.default_timeout = 60  # in seconds; the driver default is 10

cas_query_resultset = cas_session.execute("..")
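The error message also points at Session.execute[_async](timeout); if you only need the longer timeout for one statement, you can pass it per call instead:

cas_query_resultset = cas_session.execute("..", timeout=60)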
Teryl answered 12/9, 2022 at 13:28 Comment(0)
