Ways to implement data versioning in Cassandra
Can you share your thoughts on how you would implement data versioning in Cassandra?

Suppose that I need to version records in a simple address book. (Address book records are stored as rows in a ColumnFamily.) I expect that the history:

  • will be used infrequently
  • will be used all at once, to present it in a "time machine" fashion
  • there won't be more than a few hundred versions of a single record
  • history won't expire

I'm considering the following approach:

  • Convert the address book to a Super Column Family and store multiple versions of address book records in one row, keyed by timestamp, as super columns.

  • Create a new Super Column Family to store old records or changes to the records. Such a structure would look as follows:

    { 'address book row key': {
          'time stamp1': {
              'first name': 'new name',
              'modified by': 'user id',
          },
          'time stamp2': {
              'first name': 'new name',
              'modified by': 'user id',
          },
      },
      'another address book row key': { 'time stamp': { ....

  • Store versions as serialized (JSON) objects attached in a new ColumnFamily, representing sets of versions as rows and individual versions as columns (modelled after Simple Document Versioning with CouchDB).
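A minimal sketch of the third option, using plain Python dicts to stand in for a Cassandra ColumnFamily (the names `versions_cf`, `save_version`, and `load_history` are illustrative, not any real client API):

```python
import json

# Stand-in for the versions ColumnFamily: one row per record,
# one column per version, each value a JSON-serialized snapshot.
versions_cf = {}

def save_version(record_key, timestamp, record):
    # Serialize the whole record as the column value (CouchDB-style).
    row = versions_cf.setdefault(record_key, {})
    row[timestamp] = json.dumps(record)

def load_history(record_key):
    # Return all versions for "time machine" display, oldest first.
    row = versions_cf.get(record_key, {})
    return [(ts, json.loads(v)) for ts, v in sorted(row.items())]

save_version('addr_1', 1290635938721704, {'first name': 'Alice', 'modified by': 'u1'})
save_version('addr_1', 1290636018401680, {'first name': 'Alicia', 'modified by': 'u2'})
history = load_history('addr_1')
```

Since history is read all at once and rarely, paying the JSON deserialization cost for the whole row on read fits the stated access pattern.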

Krasnoyarsk answered 15/11, 2010 at 11:37 Comment(0)

If you can add the assumption that address books typically have fewer than 10,000 entries in them, then using one row per address book time line in a super column family would be a decent approach.

A row would look like:

{'address_book_18f3a8': {
    1290635938721704: {'entry1': 'entry1_stuff', 'entry2': 'entry2_stuff'},
    1290636018401680: {'entry1': 'entry1_stuff_v2', ...},
    ...
}}

where the row key identifies the address book, each super column name is a time stamp, and the subcolumns represent the address book's contents for that version.

This would allow you to read the latest version of an address book with only one query and also write a new version with a single insert.
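This layout can be sketched in plain Python, with a dict standing in for the super column family (an in-memory model only, not a Cassandra client call):

```python
# In-memory model of the super column family:
# row key -> {timestamp (super column name) -> subcolumns}.
book_cf = {
    'address_book_18f3a8': {
        1290635938721704: {'entry1': 'entry1_stuff', 'entry2': 'entry2_stuff'},
        1290636018401680: {'entry1': 'entry1_stuff_v2', 'entry2': 'entry2_stuff'},
    }
}

def latest_version(book_key):
    # One "query": fetch the row, take the super column with the highest timestamp.
    row = book_cf[book_key]
    ts = max(row)
    return ts, row[ts]

def write_version(book_key, timestamp, contents):
    # One "insert": add a new super column under the same row key.
    book_cf.setdefault(book_key, {})[timestamp] = contents

ts, entries = latest_version('address_book_18f3a8')
```

In real Cassandra the timestamps would be comparable super column names under a TimeUUID or Long comparator, so "latest" falls out of the column ordering rather than a `max()` scan.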

The reason I suggest this only for address books with fewer than 10,000 entries is that super columns must be completely deserialized when you read even a single subcolumn. That's not too bad in this case, but it's something to keep in mind.

An alternative approach would be to use a single row per version of the address book, and use a separate CF with a time line row per address book like:

{'address_book_18f3a8': {1290635938721704: some_uuid1, 1290636018401680: some_uuid2, ...}}

Here, some_uuid1 and some_uuid2 correspond to the row key for those versions of the address book. The downside to this approach is that it requires two queries every time the address book is read. The upside is that it lets you efficiently read only select parts of an address book.
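The two-query read path can be sketched the same way, with one dict per CF (again an illustrative in-memory model; `timeline_cf`, `version_cf`, and `read_entry` are made-up names):

```python
import uuid

# Timeline CF: address book key -> {timestamp -> version row key}.
timeline_cf = {}
# Version CF: version row key -> address book contents at that version.
version_cf = {}

def write_version(book_key, timestamp, contents):
    # Each version gets its own row, referenced from the timeline.
    version_key = uuid.uuid4().hex
    version_cf[version_key] = contents
    timeline_cf.setdefault(book_key, {})[timestamp] = version_key

def read_entry(book_key, entry_name):
    # Query 1: find the latest version's row key in the timeline.
    timeline = timeline_cf[book_key]
    version_key = timeline[max(timeline)]
    # Query 2: read only the wanted column from that version's row.
    return version_cf[version_key][entry_name]

write_version('address_book_18f3a8', 1290635938721704, {'entry1': 'v1'})
write_version('address_book_18f3a8', 1290636018401680, {'entry1': 'v2'})
```

The second query touches a plain (non-super) row, which is why selective reads of a single entry stay cheap even for large address books.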

Snailpaced answered 24/11, 2010 at 22:9 Comment(1)
Thank you for pointing out that you always need to read the whole super column. I hadn't spotted that fact when reading the Cassandra docs. – Krasnoyarsk

HBase (http://hbase.apache.org/) has this functionality built in. Give it a try.

Pentheam answered 18/3, 2013 at 12:59 Comment(1)
Are you referring to "Versions" in HBase (hbase.apache.org/book/versions.html)? It would be helpful to link to the actual documentation for the feature you're referring to. – Whew
