I'm using Apache Spark 2.1.1 and I have put the following hive-site.xml in the $SPARK_HOME/conf folder:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mysql_server:3306/hive_metastore?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>disable metastore schema version verification</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>${test.tmp.dir}/hadoop-tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://hadoop_namenode:9000/value_iq/hive_warehouse/</value>
    <description>Warehouse Location</description>
  </property>
</configuration>
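As a quick sanity check (my own sketch, not anything Spark-specific), the file can be parsed with a short script to confirm it is well-formed XML and that the metastore URL property is actually readable, which rules out a typo or malformed markup as the cause:

```python
# Sanity check (my own sketch, not Spark code): parse hive-site.xml and
# look up a property value, so a malformed file or misspelled property
# name can be ruled out.
import xml.etree.ElementTree as ET

def get_property(path, name):
    """Return the <value> of the <property> whose <name> matches, or None."""
    root = ET.parse(path).getroot()  # raises ParseError if the XML is broken
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Example usage (path is an assumption; adjust to your $SPARK_HOME):
# get_property("/opt/spark/conf/hive-site.xml", "javax.jdo.option.ConnectionURL")
```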
When I start the Thrift server, the metastore schema is created in my MySQL database, but it is not used; Derby is used instead.
I could not find any error in the Thrift server log file. The only thing that catches my attention is that it first attempts to use MySQL (INFO MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL) but then, without any error, uses Derby instead (INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY). This is the Thrift server log: https://www.dropbox.com/s/rxfwgjm9bdccaju/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-s-master.value-iq.com.out?dl=0
I have no Hive installed on my system; I just intend to use the Hive built into Apache Spark.
I'm using mysql-connector-java-5.1.23-bin.jar, which is located in the $SPARK_HOME/jars folder.
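Another thing worth ruling out (my own sketch, not part of Spark) is the connector jar itself: a jar that does not actually contain com.mysql.jdbc.Driver could also leave Spark falling back to its Derby default. Since a jar is just a zip archive, its contents can be checked with a few lines:

```python
# Sketch (my assumption, not Spark code): verify the MySQL JDBC driver
# class is present inside the connector jar. A jar is just a zip archive,
# so we can simply inspect its entry list.
import zipfile

def jar_contains(jar_path, entry="com/mysql/jdbc/Driver.class"):
    """Return True if the jar (zip) archive contains the given entry."""
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example usage (path is an assumption; adjust to your $SPARK_HOME):
# jar_contains("/opt/spark/jars/mysql-connector-java-5.1.23-bin.jar")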
$SPARK_HOME/conf/hive-site.xml and $SPARK_HOME/conf/spark-defaults.conf? Remember, I don't have Hive installed; I'm using Spark's built-in Hive. – Brotherly