datastax-enterprise Questions

2

I have a table as below: CREATE TABLE test ( day int, id varchar, start int, action varchar, PRIMARY KEY((day), start, id) ); I want to run this query: Select * from test where day=1 and sta...
Palmapalmaceous asked 3/3, 2017 at 10:18
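The rule behind questions like this is CQL's WHERE restriction: all partition-key columns must be restricted by equality, and clustering columns can only be restricted left to right with no gaps. A rough sketch of that rule in plain Python (my own illustration, not the asker's code):

```python
def where_clause_allowed(partition_keys, clustering_keys, restricted):
    """Rough check of CQL WHERE restrictions (ignoring ALLOW FILTERING).

    partition_keys/clustering_keys: column names in declared order.
    restricted: set of column names restricted in the WHERE clause.
    """
    # Every partition-key column must be restricted (by equality).
    if not all(pk in restricted for pk in partition_keys):
        return False
    # Clustering columns may only be restricted left to right, without gaps.
    gap = False
    for ck in clustering_keys:
        if ck in restricted and gap:
            return False
        if ck not in restricted:
            gap = True
    return True

# For PRIMARY KEY ((day), start, id):
print(where_clause_allowed(["day"], ["start", "id"], {"day", "start"}))  # True
print(where_clause_allowed(["day"], ["start", "id"], {"day", "id"}))     # False: skips 'start'
```

So `day=1` plus a range on `start` is valid, but restricting `id` while skipping `start` is not.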

1

Solved

Intro: I am following the Django tutorial. In contrast to it, I have two databases - MySQL and Cassandra. Therefore, I also need to use the Cassandra models, which contain the UUID types. The UUID h...
Godding asked 4/2, 2017 at 23:41

6

Solved

I am trying to use the Spark Cassandra Connector with Spark 1.1.0. I have successfully built the jar file from the master branch on GitHub and have gotten the included demos to work. However, when I try...
Snakeroot asked 14/9, 2014 at 19:57

2

Solved

How would one go about implementing a skip/take query (typical server-side grid paging) using Spark SQL? I have scoured the net and can only find very basic examples such as these here: https://dat...
Brazilein asked 15/5, 2015 at 12:56
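Spark SQL has no OFFSET clause, so a common workaround is to pair each row with its index (as `RDD.zipWithIndex` does) and then filter on the index range. The same idea in plain Python (a sketch of the general approach, not a Spark program):

```python
def skip_take(rows, skip, take):
    # Pair each row with its index (like RDD.zipWithIndex),
    # then keep only the indices in [skip, skip + take).
    return [row for i, row in enumerate(rows) if skip <= i < skip + take]

page = skip_take(["a", "b", "c", "d", "e"], skip=2, take=2)
print(page)  # ['c', 'd']
```

Note this still scans from the start of the ordering, so it pages correctly but does not avoid the work of skipping.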

2

Solved

What happens when all seed nodes in Cassandra are down? Can new nodes join the cluster at that point?
Hedjaz asked 9/11, 2016 at 14:47

2

Solved

The exact exception is as follows: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal] These are the version...

2

Solved

In Cassandra I have a list column type. I am new to Spark and Scala, and have no idea where to start. In Spark I want to get the count of each value - is it possible to do so? Below is the dataframe: +---...

3

I have a doubt from reading the DataStax documentation about Cassandra write consistency. I have a question on how Cassandra will maintain a consistent state in the following scenario: Write consistency lev...
Crybaby asked 13/2, 2014 at 7:15
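Consistency questions like this usually come down to the quorum overlap rule: a read is guaranteed to see the latest acknowledged write only when the read and write consistency levels overlap, i.e. R + W > N for replication factor N. A quick check of that rule (my own sketch, not from the question):

```python
def overlapping_quorums(n, r, w):
    # Reads see the latest acknowledged write when every read quorum
    # intersects every write quorum: R + W > N.
    return r + w > n

# RF=3 with QUORUM reads and writes (2 + 2 > 3): overlapping, consistent.
print(overlapping_quorums(3, 2, 2))  # True
# RF=3 with ONE writes and ONE reads (1 + 1 <= 3): a read may miss the write.
print(overlapping_quorums(3, 1, 1))  # False
```

When the quorums do not overlap, Cassandra still converges eventually via read repair and hinted handoff, but individual reads can return stale data.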

3

Currently we're using Cassandra with cassandra-driver, doing standard queries and accessing "raw" results in our DAL and BL layers. Our application should support millions of users, each request in...
Recite asked 9/2, 2016 at 11:31

1

Solved

I have tens of millions of rows of data. Is it possible to analyze all of these within a week or a day using Spark Streaming? What's the limit of Spark Streaming in terms of data volume? I am not...
Breadstuff asked 29/2, 2016 at 2:51

1

Solved

Setup: We have a 3-node Cassandra cluster with around 850 GB of data on each node. We have an LVM setup for the Cassandra data directory (currently consisting of 3 drives: 800 GB + 100 GB + 100 GB) and have separate...

1

Solved

My code directly executes a prepared bound statement without any explicit query string. How can I get the CQL it is trying to execute against the Cassandra database? For example: public <T> void save(T en...

1

I have a scenario where I will be receiving streaming data, which is processed by my Spark Streaming program, and the output for each interval is appended to my existing Cassandra table. Curren...

2

Solved

cqlsh doesn't allow nested queries, so I can't export selected data to CSV. I'm trying to export the selected data (about 200,000 rows with a single column) from Cassandra using: echo "SELECT disti...
Hydrocellulose asked 24/12, 2015 at 15:10

1

Solved

If someone has experience using UDTs (user-defined types), I would like to understand how backward compatibility would work. Say I have the following UDT: CREATE TYPE addr ( street1 t...

1

Solved

I tried reading up on DataStax blogs and documentation but could not find anything specific on this. Is there a way to make 2 tables in Cassandra belong to the same partition? For example: CREATE TYPE a...

1

Solved

I'm trying to filter on a small part of a huge C* table by using: val snapshotsFiltered = sc.parallelize(startDate to endDate).map(TableKey(_)).joinWithCassandraTable("listener","snapshots_tspark...
Sikata asked 25/10, 2015 at 12:8

1

Solved

I'm trying to filter on a small part of a huge Cassandra table by using: val snapshotsFiltered = sc.parallelize(startDate to endDate).map(TableKey(_2)).joinWithCassandraTable("listener","snapshot...
Trautman asked 2/8, 2015 at 15:34

1

Tried using both spark-shell and spark-submit, and I am getting this exception: Initializing SparkContext with MASTER: spark://1.2.3.4:7077 ERROR 2015-06-11 14:08:29 org.apache.spark.scheduler.cluster.Sp...

1

I have an old Cassandra cluster that needs to be brought back to life. I would like to clear out all the user and system data, all stored tokens, everything, and start from a clean slate - is ther...
Marable asked 13/7, 2015 at 7:40

0

We have a cluster of 5 nodes. Since the DataStax Community edition doesn't offer reliable technical support, we are planning to purchase DataStax Enterprise. I would like to know the cost of the DataStax E...
Munt asked 8/7, 2015 at 9:50

1

Solved

I have a table with 5 columns: 1. ID - a number, but it can be stored as text or number; 2. name - text; 3. date - a date value, but it can be stored as date or text; 4. time - a number, but it can be stored as text or...
Darcidarcia asked 20/6, 2015 at 19:26

1

Solved

When I try to insert data into Cassandra using the query below, I get the error mentioned below: cqlsh:assign> insert into tblFiles1(rec_no,clientid,contenttype,datafiles,filename) values(1...
Pessimism asked 1/6, 2015 at 13:40

1

I've seen an issue that happens fairly often when bootstrapping new nodes into a DataStax Enterprise Cassandra cluster (ver: 2.0.10.71). When starting the new node to be bootstrapped, the bootstrap pr...
Primordium asked 27/4, 2015 at 20:36

1

Solved

When I issue $ nodetool compactionhistory I get: . . . compacted_at bytes_in bytes_out rows_merged . . . 1404936947592 8096 7211 {1:3, 3:1} What does {1:3, 3:1} mean? The only documentation I ...
Amblyopia asked 19/12, 2014 at 12:49
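For reference, `rows_merged` maps "how many SSTables a row was found in" to "how many rows were merged that way": {1:3, 3:1} means three rows came from a single SSTable each and one row was merged from three SSTables. A small parser (my own sketch) that turns the string into totals:

```python
import re

def parse_rows_merged(s):
    # "{1:3, 3:1}" -> {1: 3, 3: 1}: key = SSTables a row was found in,
    # value = number of rows merged that way.
    return {int(k): int(v) for k, v in re.findall(r"(\d+):(\d+)", s)}

merged = parse_rows_merged("{1:3, 3:1}")
rows_out = sum(merged.values())                   # rows in the compacted output
rows_in = sum(k * v for k, v in merged.items())   # row versions read in
print(rows_out, rows_in)  # 4 6
```

In this example the compaction read six row versions across the input SSTables and wrote four merged rows.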

© 2022 - 2024 — McMap. All rights reserved.