First and foremost: there is no primary key or index concept in BigQuery. So you can't update a record by its "keys" without a full table scan (sure, you can do it if you have a bunch of money to throw away). Think of sinking data into BigQuery as writing to an append-only tape. If you need to analyze the latest state of a record, you will have to resort to other strategies, for example a scheduled MERGE query that picks the latest record per "key" from a staging table and upserts it into a reporting table (see the sketch below).
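For illustration only, here is a minimal sketch of that scheduled merge, run from Python with the google-cloud-bigquery client. The project, dataset, table and column names (staging, reporting, key, value, event_ts) are assumptions for the example, not anything BigQuery prescribes:

```python
# Sketch: deduplicate an append-only staging table into a reporting table.
# Assumes hypothetical tables my_project.my_dataset.staging / .reporting
# with columns key, value, event_ts.
from google.cloud import bigquery

client = bigquery.Client(project="my_project")

merge_sql = """
MERGE `my_project.my_dataset.reporting` AS t
USING (
  -- keep only the newest row per key from the staging table
  SELECT * EXCEPT (rn)
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY key ORDER BY event_ts DESC) AS rn
    FROM `my_project.my_dataset.staging`
  )
  WHERE rn = 1
) AS s
ON t.key = s.key
WHEN MATCHED THEN
  UPDATE SET value = s.value, event_ts = s.event_ts
WHEN NOT MATCHED THEN
  INSERT (key, value, event_ts) VALUES (s.key, s.value, s.event_ts)
"""

# Run as a normal query job; schedule this script (or use a scheduled query).
client.query(merge_sql).result()
```

The point is only that the "latest state" is computed periodically from the append-only data, not updated in place.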
READ ON..
This may help a bit in deciding between the different datastore solutions that Google Cloud offers (Disclaimer: copied from the Google Cloud page)
If your requirement is a live database, Bigtable is what you need (not really an OLTP system though). If it is more of an analytics purpose, then BigQuery is what you need!
Think of OLTP vs OLAP; or, if you are familiar with Cassandra vs Hadoop, Bigtable roughly equates to Cassandra and BigQuery roughly equates to Hadoop (agreed, it's not a fair comparison, but you get the idea)
https://cloud.google.com/images/storage-options/flowchart.svg
Note
Please keep in mind that Bigtable is not a relational database: it does not support SQL queries or JOINs, nor does it support multi-row transactions. It is also not a good solution for small amounts of data. If you want an RDBMS-style OLTP database, you might need to look at Cloud SQL (MySQL/PostgreSQL) or Spanner.
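To make that difference concrete, here is a minimal sketch of what Bigtable access looks like with the google-cloud-bigtable Python client: everything revolves around row keys and column families rather than SQL. The instance, table, column-family and row-key names below are made up for the example:

```python
# Sketch: key-based reads/writes in Bigtable, no SQL involved.
# "my-instance", "events", "cf1" and the row key are hypothetical names.
from google.cloud import bigtable

client = bigtable.Client(project="my_project", admin=True)
instance = client.instance("my-instance")
table = instance.table("events")

# Write: mutate a single row addressed by its row key.
row = table.direct_row(b"user#42#2017-01-01T00:00:00Z")
row.set_cell("cf1", "payload", b'{"event": "click"}')
row.commit()

# Read: fetch the same row back by key (a point lookup, no query language).
fetched = table.read_row(b"user#42#2017-01-01T00:00:00Z")
if fetched is not None:
    cell = fetched.cells["cf1"][b"payload"][0]
    print(cell.value)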
Cost Perspective
https://mcmap.net/q/205226/-google-bigtable-vs-bigquery-for-storing-large-number-of-events. Quoting the relevant parts here.
The overall cost boils down to how often you will 'query' the data. If it's a backup and you don't replay events too often, it'll be dirt cheap. However, if you need to replay it daily, you will start triggering the $5/TB-scanned charge very easily. We were surprised too how cheap inserts and storage were, but this is of course because Google expects you to run expensive queries on them at some point in time. You'll have to design around a few things though. E.g. AFAIK streaming inserts have no guarantee of being written to the table, and you have to poll frequently on the tail of the list to see if it was really written. Tailing can be done efficiently with a time-range table decorator, though (so you're not paying for scanning the whole dataset).
If you don't care about order, you can even list a table for free. No need to run a 'query' then.
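Since the scanned-bytes pricing is what usually dominates, it helps to check a query's footprint before running it. Below is a minimal sketch using BigQuery's dry-run mode; the table name is again an assumption, and the $5/TB figure is just the on-demand rate mentioned in the quote, which can change:

```python
# Sketch: estimate how many bytes a query would scan (and roughly what it
# would cost at the quoted $5/TB on-demand rate) without actually running it.
from google.cloud import bigquery

client = bigquery.Client(project="my_project")

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT key, value FROM `my_project.my_dataset.reporting` "
    "WHERE event_ts >= '2017-01-01'",
    job_config=job_config,
)

tb_scanned = job.total_bytes_processed / 1e12
print(f"Would scan {job.total_bytes_processed} bytes "
      f"(~${tb_scanned * 5:.2f} at $5/TB)")
```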
Edit 1
Cloud Spanner is relatively young, but it is powerful and promising. At least, Google's marketing claims that its features are the best of both worlds (traditional RDBMS and NoSQL)