How to save DataFrame directly to Hive?

11

107

Is it possible to save a DataFrame in Spark directly to Hive?

I have tried converting the DataFrame to an RDD, saving it as a text file, and then loading it into Hive. But I am wondering if I can save the DataFrame directly to Hive.

Proleg answered 5/6, 2015 at 10:15 Comment(0)
131

You can create an in-memory temporary table and store it in a Hive table using sqlContext.

Let's say your DataFrame is myDf. You can create a temporary table using:

myDf.createOrReplaceTempView("mytempTable") 

Then you can use a simple Hive statement to create the table and dump the data from your temp table:

sqlContext.sql("create table mytable as select * from mytempTable");
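
Putting the two steps together, here is a minimal end-to-end sketch (assuming Spark 2.x with Hive support enabled on the SparkSession; myDf, the source table, and the target table names are placeholders):

import org.apache.spark.sql.SparkSession

// Hive support must be enabled so the created table lands in the Hive metastore
val spark = SparkSession.builder()
  .appName("DataFrameToHive")
  .enableHiveSupport()
  .getOrCreate()

val myDf = spark.table("some_source_table") // or any DataFrame you already have

myDf.createOrReplaceTempView("mytempTable")
spark.sql("create table mytable as select * from mytempTable")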
Keeney answered 18/2, 2016 at 10:12 Comment(12)
This is not a valid HQL statement: cannot recognize input near 'select' '*' 'from' in create table statement; line 1 pos 16 – Shonna
This got around the Parquet read errors I was getting when using write.saveAsTable in Spark 2.0 – Katlynkatmai
No problem. Btw, I just found out you can't use a PARTITIONED BY clause in this statement. – Mukund
Yes. However, we can use partitionBy on the DataFrame before creating the temp table. @Mukund – Keeney
Thanks for this answer. I've tried to do the same thing in my program as well: dataframe.registerTempTable("RiskRecon_tmp"); hiveContext.sql("CREATE TABLE IF NOT EXISTS RiskRecon_TOES as select * from RiskRecon_tmp"). But I get this error: java.lang.IllegalArgumentException: Wrong FS: file:/tmp/spark-a68a9fc7-50f3-43ae-ac06-8c07ba7253c2/scratch_hive_2017-07-12_07-12-57_948_8232393446428506434-1, expected: hdfs://nameservice1 at the line where I am passing the query. Do you have any idea regarding this? @VinayKumar – Huberty
@HemanthAnnavarapu check this (community.hortonworks.com/content/supportkb/48759/…) – Keeney
How were you able to mix and match the temporary table with the Hive table? When doing show tables it only includes the Hive tables for my Spark 2.3.0 installation. – Rattan
This temporary table will be saved to your Hive context and doesn't belong to Hive tables in any way. – Keeney
Hi @VinayKumar, why do you say "If you are using saveAsTable (it's more like persisting your DataFrame), you have to make sure that you have enough memory allocated to your Spark application"? Could you explain this point? – Trothplight
@Trothplight it's irrelevant. I have updated the answer now. – Keeney
@VinayKumar: I tried partitioning the DF with partitionBy($column) before storing it as a temp table, but it did not create any partitions in Hive. Could you please comment on this? Thanks – Sommers
Hi @VinayKumar, how should I import sqlContext so that I can use it this way? – Fed
34

Use DataFrameWriter.saveAsTable (df.write.saveAsTable(...)). See the Spark SQL and DataFrame Guide.
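
A minimal sketch of that call, assuming df is an existing DataFrame and the SparkSession has Hive support enabled (the database and table names are placeholders):

// Writes the DataFrame as a managed table in the Hive metastore;
// mode("overwrite") replaces the table if it already exists.
df.write
  .mode("overwrite")
  .saveAsTable("my_database.my_table")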

Suite answered 5/6, 2015 at 13:36 Comment(7)
saveAsTable does not create Hive-compatible tables. The best solution I found is Vinay Kumar's. – Mudslinging
@Jacek: I have added this note myself, because I think my answer is wrong. I would delete it, except that it is accepted. Do you think the note is wrong? – Suite
Yes. The note was wrong and that's why I removed it. "Please correct me if I'm wrong" applies here :) – Abortive
@DanielDarabos, why "saveAsTable is deprecated and removed in Spark 2.0.0"? I see it is still quite supported and documented in Spark 2.1: spark.apache.org/docs/latest/… – Amatory
I think it used to be df.saveAsTable. That is gone now, but there is df.write.saveAsTable. I don't have a Hive installation to test it against, but it does do something, so you're right. I have no clue. Okay, I'll remove the note! – Suite
Will this df.write().saveAsTable(tableName) also write streaming data into the table? – Stedman
No, you can't save streaming data with saveAsTable; it's not even in the API. – Unstopped
26

I don't see df.write.saveAsTable(...) deprecated in the Spark 2.0 documentation. It has worked for us on Amazon EMR. We were perfectly able to read data from S3 into a DataFrame, process it, create a table from the result, and read it with MicroStrategy. Vinay's answer has also worked, though.

Kyser answered 22/12, 2016 at 10:2 Comment(2)
Somebody flagged this answer as low-quality due to length and content. To be honest, it probably would have been better as a comment. I guess it's been up for two years and some people have found it helpful, so it might be good to leave things as is? – Arsenopyrite
I agree, a comment would have been the better choice. Lesson learned :-) – Kyser
17

You need to have/create a HiveContext:

import org.apache.spark.sql.hive.HiveContext;

HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(sc.sc());

Then directly save the DataFrame or select the columns to store as a Hive table.

Here df is the DataFrame:

df.write().mode("overwrite").saveAsTable("schemaName.tableName");

or

df.select(df.col("col1"), df.col("col2"), df.col("col3")).write().mode("overwrite").saveAsTable("schemaName.tableName");

or

df.write().mode(SaveMode.Overwrite).saveAsTable("dbName.tableName");

SaveModes are Append/Ignore/Overwrite/ErrorIfExists
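
For reference, a minimal Scala sketch of the same idea (assuming Spark 1.x with a HiveContext; note that in Scala, write is used without parentheses, as a comment below points out; the source and target table names are placeholders):

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc) // sc is an existing SparkContext
val df = sqlContext.table("some_source_table") // or any DataFrame you already have

df.write.mode(SaveMode.Overwrite).saveAsTable("schemaName.tableName")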

I have added here the definition for HiveContext from the Spark documentation:

In addition to the basic SQLContext, you can also create a HiveContext, which provides a superset of the functionality provided by the basic SQLContext. Additional features include the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the ability to read data from Hive tables. To use a HiveContext, you do not need to have an existing Hive setup, and all of the data sources available to a SQLContext are still available. HiveContext is only packaged separately to avoid including all of Hive’s dependencies in the default Spark build.


On Spark version 1.6.2, using "dbName.tableName" gives this error:

org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables. If the table name has dots (.) in it, please quote the table name with backticks (`).

Madalynmadam answered 19/7, 2016 at 20:11 Comment(2)
Is the second command, 'df.select(df.col("col1"),df.col("col2"), df.col("col3")) .write().mode("overwrite").saveAsTable("schemaName.tableName");', requiring that the selected columns you intend to overwrite already exist in the table? So you have the existing table and you only overwrite the existing columns 1, 2, 3 with the new data from your df in Spark? Is that interpretation right? – Casia
df.write().mode... needs to be changed to df.write.mode... – Jostle
12

Sorry for writing late to the post, but I see no accepted answer.

df.write().saveAsTable will throw an AnalysisException and is not Hive-table compatible.

Storing the DF with df.write().format("hive") should do the trick!

However, if that doesn't work, then going by the previous comments and answers, this is the best solution in my opinion (open to suggestions, though).

The best approach is to explicitly create the Hive table (including a PARTITIONED table):

def createHiveTable(): Unit = {
  spark.sql(s"CREATE TABLE $hive_table_name($fields) " +
    s"PARTITIONED BY ($partition_column String) STORED AS $StorageType")
}

Save the DF as a temp table:

df.createOrReplaceTempView(s"$tempTableName")

And insert into the PARTITIONED Hive table:

spark.sql(s"insert into table default.$hive_table_name PARTITION($partition_column) select * from $tempTableName")
spark.sql(s"select * from default.$hive_table_name").show(1000, false)

Of course, the LAST COLUMN in the DF will be the PARTITION COLUMN, so create the Hive table accordingly (see the small sketch below for one way to order the columns)!
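
A minimal illustration with hypothetical column names, where dt is the partition column and is selected last so the dynamic-partition insert lines up:

// Hypothetical columns; "dt" must come last so PARTITION(dt) ... select * lines up
val ordered = df.select("id", "value", "dt")
ordered.createOrReplaceTempView("tempTable")
spark.sql("insert into table default.my_partitioned_table PARTITION(dt) select * from tempTable")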

Please comment whether it works or not!


--UPDATE--

df.write
  .partitionBy("$partition_column")
  .format("hive")
  .mode(SaveMode.Append)
  .saveAsTable("$new_table_name_to_be_created_in_hive")  // Table should not exist OR should be a PARTITIONED table in Hive
Sommers answered 20/4, 2020 at 16:1 Comment(0)
9

Saving to Hive is just a matter of using the write() method of your DataFrame:

df.write.saveAsTable(tableName)

See https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/DataFrameWriter.html#saveAsTable(java.lang.String)

From Spark 2.2: use Dataset instead of DataFrame.

Cue answered 7/6, 2017 at 11:45 Comment(2)
I seem to have an error which states Job aborted. I tried the following code: pyspark_df.write.mode("overwrite").saveAsTable("InjuryTab2") – Spurling
Hi! Why this: "From Spark 2.2: use Dataset instead of DataFrame"? – Bremble
4

For Hive external tables I use this function in PySpark:

import re

def save_table(sparkSession, dataframe, database, table_name, save_format="PARQUET"):
    print("Saving result in {}.{}".format(database, table_name))
    output_schema = "," \
        .join(["{} {}".format(x.name.lower(), x.dataType) for x in list(dataframe.schema)]) \
        .replace("StringType", "STRING") \
        .replace("IntegerType", "INT") \
        .replace("DateType", "DATE") \
        .replace("LongType", "INT") \
        .replace("TimestampType", "INT") \
        .replace("BooleanType", "BOOLEAN") \
        .replace("FloatType", "FLOAT")\
        .replace("DoubleType","FLOAT")
    output_schema = re.sub(r'DecimalType[(][0-9]+,[0-9]+[)]', 'FLOAT', output_schema)

    sparkSession.sql("DROP TABLE IF EXISTS {}.{}".format(database, table_name))

    query = "CREATE EXTERNAL TABLE IF NOT EXISTS {}.{} ({}) STORED AS {} LOCATION '/user/hive/{}/{}'" \
        .format(database, table_name, output_schema, save_format, database, table_name)
    sparkSession.sql(query)
    dataframe.write.insertInto('{}.{}'.format(database, table_name),overwrite = True)
Insomniac answered 17/6, 2019 at 9:17 Comment(1)
This is great. Thank you! – Deaminate
3

You could use the Hortonworks spark-llap library like this:

import com.hortonworks.hwc.HiveWarehouseSession

df.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .mode("append")
  .option("table", "myDatabase.myTable")
  .save()
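
Reading the table back through the same connector looks roughly like this (a sketch based on the Hortonworks Hive Warehouse Connector API; the database and table names are placeholders):

val hive = HiveWarehouseSession.session(spark).build()
hive.executeQuery("select * from myDatabase.myTable").show()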
Caroleecarolin answered 8/9, 2020 at 9:52 Comment(0)
2

If you want to create a Hive table (which does not exist) from a DataFrame (sometimes it fails to create with DataFrameWriter.saveAsTable), StructType.toDDL helps in listing the columns as a string.

val df = ...

val schemaStr = df.schema.toDDL // This gives the columns as a DDL string
spark.sql(s"""create table hive_table ( ${schemaStr})""")

// Now write the dataframe to the table
df.write.saveAsTable("hive_table")

hive_table will be created in the default database since we did not provide any database in spark.sql(). stg.hive_table can be used to create hive_table in the stg database.
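
For illustration, a small sketch with a hypothetical two-column DataFrame, showing the kind of string toDDL produces (toDDL is available on recent Spark versions):

import spark.implicits._

// Hypothetical DataFrame, just to show the toDDL output
val df = Seq((1, "a")).toDF("id", "name")
val schemaStr = df.schema.toDDL
// schemaStr is roughly "`id` INT,`name` STRING" (exact quoting varies by Spark version)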

Repute answered 11/3, 2020 at 10:4 Comment(1)
Detailed example found here: https://mcmap.net/q/205225/-how-to-create-hive-table-from-spark-data-frame-using-its-schema – Repute
1

Here is a PySpark version to create a Hive table from a Parquet file. You may have generated Parquet files using an inferred schema and now want to push the definition to the Hive metastore. You can also push the definition to a system like AWS Glue or AWS Athena, not just to the Hive metastore. Here I am using spark.sql to push/create the permanent table.

# Location where my parquet files are present.
df = spark.read.parquet("s3://my-location/data/")

buf = []
buf.append('CREATE EXTERNAL TABLE test123 (')
keyanddatatypes = df.dtypes
sizeof = len(keyanddatatypes)
print("size----------", sizeof)
count = 1
for eachvalue in keyanddatatypes:
    print(count, sizeof, eachvalue)
    if count == sizeof:
        # last column: no trailing comma
        total = str(eachvalue[0]) + ' ' + str(eachvalue[1])
    else:
        total = str(eachvalue[0]) + ' ' + str(eachvalue[1]) + ','
    buf.append(total)
    count = count + 1

buf.append(' )')
buf.append(' STORED as parquet ')
buf.append("LOCATION ")
buf.append("'")
buf.append('s3://my-location/data/')
buf.append("'")
## partition by pt
tabledef = ''.join(buf)

print("---------print definition ---------")
print(tabledef)
## create a table using spark.sql. Assuming you are using spark 2.1+
spark.sql(tabledef)
Hypertension answered 22/4, 2018 at 6:5 Comment(0)
1

In my case this works fine:

from pyspark_llap import HiveWarehouseSession
hive = HiveWarehouseSession.session(spark).build()
hive.setDatabase("DatabaseName")
df = spark.read.format("csv").option("Header",True).load("/user/csvlocation.csv")
df.write.format(HiveWarehouseSession().HIVE_WAREHOUSE_CONNECTOR).option("table",<tablename>).save()

Done!!

You can read the data back; say the table is "Employee":

hive.executeQuery("select * from Employee").show()

For more details use this URL: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/integrating-hive/content/hive-read-write-operations.html

Yours answered 10/2, 2020 at 10:58 Comment(0)
