Is it possible to Bulk Insert using Google Cloud Datastore

We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.

Eventually we would move all our data there, however initially focusing on the archived data as a test.

Our language of choice is Python, and we have been able to transfer data from MySQL to the Datastore row by row.

We have approximately 120 million rows to transfer, and a one-row-at-a-time approach will take a very long time.

Has anyone found documentation or examples on how to bulk insert data into Cloud Datastore using Python?

Any comments or suggestions are appreciated; thank you in advance.

Faucher answered 31/8, 2015 at 16:47 Comment(1)
The Google Cloud Datastore docs cloud.google.com/sql/docs/import-export?hl=en talk about using mysqldump to upload a complete database - have you tried reading these docs? – Galven

There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.

You could always write a quick script using a local queue that parallelizes the work.

The basic gist would be:

  • Queuing script pulls data out of your MySQL instance and puts it on a queue.
  • (Many) Workers pull from this queue, and try to write the item to Datastore.
  • On failure, push the item back on the queue.

Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
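
For illustration, here is a rough sketch of that pattern using Python threads and the google-cloud-datastore client library. The kind name 'ArchivedRow' and the fetch_rows_from_mysql() helper are placeholders for your own schema and extraction code, and each row is assumed to be a dict of Datastore-compatible values with an 'id' column:

import queue
import threading

from google.cloud import datastore

NUM_WORKERS = 50
BATCH_SIZE = 500  # Datastore accepts up to 500 entities per commit
work_queue = queue.Queue(maxsize=1000)

def worker():
    client = datastore.Client()
    while True:
        rows = work_queue.get()  # each queue item is a batch of row dicts
        entities = []
        for row in rows:
            entity = datastore.Entity(key=client.key('ArchivedRow', row['id']))
            entity.update(row)
            entities.append(entity)
        try:
            client.put_multi(entities)  # one RPC for the whole batch
        except Exception:
            work_queue.put(rows)  # on failure, push the batch back on the queue
        finally:
            work_queue.task_done()

for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# Producer: pull rows out of MySQL and enqueue them in batches.
batch = []
for row in fetch_rows_from_mysql():  # placeholder for your own extraction code
    batch.append(row)
    if len(batch) == BATCH_SIZE:
        work_queue.put(batch)
        batch = []
if batch:
    work_queue.put(batch)
work_queue.join()  # block until every batch has been written (or requeued)

Tuning NUM_WORKERS and BATCH_SIZE against your observed write rate is where most of the work goes; the sketch above only shows the shape of the queue/worker split.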


Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading them into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k).

If you have properties (which will be indexed by default), this cost goes up substantially.

The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties).

A single "new entity write" is equivalent to:

  • + 2 (1 x 2 write ops fixed cost per new entity)
  • + 10 (5 x 2 write ops per indexed property)
  • = 12 "operations" per entity.

So your actual cost to load this data is:

120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00

Systematics answered 27/10, 2015 at 11:53 Comment(1)
Can you expand on that? Is the absence of bulk-loading a design choice? – Marco

I believe what you are looking for is the put_multi() method.

From the docs, you can use put_multi() to batch multiple put operations. This will result in a single RPC for the batch rather than one for each of the entities.

Example:

# a list of many entities
user_entities = [ UserEntity(name='user %s' % i) for i in xrange(10000)]
users_keys = ndb.put_multi(user_entities) # keys are in same order as user_entities

Also of note, from the docs:

Note: The ndb library automatically batches most calls to Cloud Datastore, so in most cases you don't need to use the explicit batching operations shown below.

That said, you may still, as suggested in the other answer, use a task queue (I prefer the deferred library) in order to batch-put a lot of data in the background.
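
A minimal sketch of that approach on App Engine standard (Python 2 runtime): UserEntity is the model from the example above, and rows is a placeholder for one batch of source rows.

from google.appengine.ext import deferred, ndb

def put_users_batch(rows):
    # Runs later on a task queue, off the request path.
    entities = [UserEntity(name=row['name']) for row in rows]
    ndb.put_multi(entities)

# Enqueue the batch write so it executes in the background.
deferred.defer(put_users_batch, rows)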

Roden answered 10/5, 2018 at 6:3 Comment(1)
I have not tried this; I will take a look and adjust accordingly. Thanks for your response. – Faucher

As an update to @JJ Geewax's answer: as of July 1st, 2016 the cost of read and write operations has changed, as explained here: https://cloud.google.com/blog/products/gcp/google-cloud-datastore-simplifies-pricing-cuts-cost-dramatically-for-most-use-cases

So writing should have gotten cheaper for the described case, as

writing a single entity only costs 1 write regardless of indexes and will now cost $0.18 per 100,000
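
At that rate, the 120 million entities from the example above would cost roughly 120m ÷ 100k × $0.18 ≈ $216 to write, independent of how many properties are indexed.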

Rotifer answered 22/1, 2020 at 12:54 Comment(0)
