External Hive table: refreshTable vs MSCK REPAIR

I have an external Hive table stored as Parquet, partitioned on a column (say as_of_dt), and data gets inserted via Spark Streaming. Every day a new partition gets added. I am doing msck repair table so that the Hive metastore gets the newly added partition info. Is this the only way, or is there a better way? I am concerned that if downstream users are querying the table, msck repair could cause issues with unavailable or stale data. I was going through the HiveContext API and see a refreshTable option. Does it make sense to use refreshTable instead?
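
For context, this is roughly how the job looks today (the table name, base path, and writeBatch helper below are placeholders for illustration, not the real ones):

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Called for each micro-batch of the streaming job.
def writeBatch(df: DataFrame): Unit = {
  // Appends Parquet files under .../as_of_dt=<value>/ directories.
  df.write.mode("append").partitionBy("as_of_dt").parquet("/data/my_table")
  // Registers any newly created partition directories in the Hive metastore.
  spark.sql("MSCK REPAIR TABLE mydb.my_table")
}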

Turnbull asked 6/8, 2018 at 17:40 Comment(2)
Any chance your Spark Streaming app could simply ALTER TABLE ADD PARTITION when creating a new directory? I know Spark has limited support for Hive SQL, but there are workarounds, e.g. using WebHCat...Supereminent
The streaming app uses dataframe.write.mode("append").partitionBy(partitionKeys).parquet(baseDirectory); Spark internally takes care of adding the external Parquet folder. Just wondering if msck repair is the only option.Turnbull

To directly answer your question: msck repair table checks whether the partitions for a table are still active. Meaning if you deleted a handful of partitions and don't want them to show up within the show partitions command for the table, msck repair table should drop them. Msck repair could take more time than an invalidate or refresh statement; however, Invalidate Metadata runs only on the Hive/Impala side and updates only the Hive metastore, while Refresh runs only in Spark SQL and updates the Spark metadata store.

The Hive metastore should be fine if you are completing the add-partition step somewhere in the processing; however, if you ever want to access the Hive table through Spark SQL, you will need to update the metadata through Spark (or Impala, or another process that updates the Spark metadata).

Any time you update or change the contents of a Hive table, the Spark metastore can fall out of sync, leaving you unable to query the data through the spark.sql command set. So if you want to query that data, you need to keep the Spark metastore in sync.

If you have a Spark version that allows for it, you should refresh and add partitions to Hive tables within Spark, so that all metastores stay in sync. Below is how I do it:

//Non-partitioned table: overwrite the files, then refresh Spark's cached metadata
outputDF.write.format("parquet").mode("overwrite").save(fileLocation)
spark.sql("refresh table " + tableName)

//Partitioned table: write the partition directory, register it in the Hive metastore, then refresh
outputDF.write.format("parquet").mode("overwrite").save(fileLocation + "/" + partition)
val addPartitionsStatement = "alter table " + tableName + " add if not exists partition(partitionKey='" + partition + "') location '" + fileLocation + "/" + partition + "'"
spark.sql(addPartitionsStatement)
spark.sql("refresh table " + tableName)
Testudinal answered 6/8, 2018 at 22:37 Comment(1)
Thanks a lot afeldman. Currently I have a table with some columns and as_of_dt as the partition column. Every day new records get added, and since the partition value changes they go to a new partition. I am doing dataframe.write.mode("append").partitionBy(partitionKeys).parquet(baseDirectory) followed by MSCK REPAIR TABLE. So will I get any benefit if I do alter table add if not exists partition and refresh table instead of msck repair?Turnbull

It looks like refreshTable does refresh the cached metadata, without affecting the Hive metadata.

Doc says:

Invalidate and refresh all the cached metadata of the given table. For performance reasons, Spark SQL or the external data source library it uses might cache certain metadata about a table, such as the location of blocks. When those change outside of Spark SQL, users should call this function to invalidate the cache.

The method does not update the Hive metadata, so msck repair is still necessary.
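
In practice that means doing both. A minimal sketch, assuming a placeholder table mydb.my_table and Spark 2.x, where spark.catalog.refreshTable is the equivalent of the HiveContext.refreshTable mentioned in the question:

// Sketch only; mydb.my_table is a placeholder table name.
// 1) Register newly created partition directories in the Hive metastore.
spark.sql("MSCK REPAIR TABLE mydb.my_table")
// 2) Drop Spark's cached metadata so subsequent spark.sql queries see the new partitions.
spark.catalog.refreshTable("mydb.my_table")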

Bee answered 6/8, 2018 at 18:22 Comment(0)
