I am trying to create a Hive table with schema string,string,double over a folder containing two Parquet files. The first file's schema is string,string,double, while the second file's schema is string,double,string.
CREATE EXTERNAL TABLE dynschema (
trans_date string,
currency string,
rate double)
STORED AS PARQUET
LOCATION '/user/impadmin/test/parquet/evolution/';
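For reference, this is roughly how I double-checked each file's footer schema (a minimal sketch using parquet-mr's ParquetFileReader; the class name DumpParquetSchema is mine, the paths come from the command line, and older parquet releases use the parquet.* package prefix instead of org.apache.parquet.*):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public class DumpParquetSchema {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Print the footer schema of every file passed on the command line,
    // e.g. the two files under /user/impadmin/test/parquet/evolution/.
    for (String arg : args) {
      ParquetMetadata footer = ParquetFileReader.readFooter(conf, new Path(arg));
      System.out.println(arg + ":\n" + footer.getFileMetaData().getSchema());
    }
  }
}

This confirmed the column order really does differ between the two files.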
I am trying to read the Hive table in a Pig (0.14) script.
A = LOAD 'dynschema' USING org.apache.hive.hcatalog.pig.HCatLoader();
DUMP A;
But I get the following error:
java.lang.UnsupportedOperationException: Cannot inspect org.apache.hadoop.hive.serde2.io.DoubleWritable
I suspect this is because the second file's schema differs from the table schema: the first file's split is read successfully, and the exception occurs only while reading the second file's split.
I also looked into HCatRecordReader's code and found this piece of code:
DefaultHCatRecord dr = new DefaultHCatRecord(outputSchema.size());
int i = 0;
for (String fieldName : outputSchema.getFieldNames()) {
  if (dataSchema.getPosition(fieldName) != null) {
    // Field exists in the data schema: copy its value from the underlying record.
    dr.set(i, r.get(fieldName, dataSchema));
  } else {
    // Field not in the data schema: fill in the value from elsewhere (e.g. partition columns).
    dr.set(i, valuesNotInDataCols.get(fieldName));
  }
  i++;
}
Here I see logic for converting from the data schema to the output schema, but while debugging I found no difference between the two schemas.
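The only workaround I can think of is to rewrite the out-of-order file into the table's column order before loading. A rough sketch using parquet-mr's example Group API (assuming parquet-hadoop 1.8+; the class name ReorderParquetColumns, the schema string, and the input/output paths are placeholders, not from my actual job):

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.example.GroupReadSupport;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class ReorderParquetColumns {
  public static void main(String[] args) throws Exception {
    // Target layout = the Hive table's column order (string, string, double).
    MessageType tableSchema = MessageTypeParser.parseMessageType(
        "message dynschema { "
      + "  optional binary trans_date (UTF8); "
      + "  optional binary currency (UTF8); "
      + "  optional double rate; "
      + "}");

    Path in = new Path(args[0]);   // file whose columns are string,double,string
    Path out = new Path(args[1]);  // rewritten copy in table column order

    ParquetReader<Group> reader =
        ParquetReader.builder(new GroupReadSupport(), in).build();
    ParquetWriter<Group> writer =
        ExampleParquetWriter.builder(out).withType(tableSchema).build();

    SimpleGroupFactory factory = new SimpleGroupFactory(tableSchema);
    Group row;
    while ((row = reader.read()) != null) {
      // Copy field values by name, so the source column order is irrelevant.
      // (Assumes non-null values in every row, for brevity.)
      Group copy = factory.newGroup();
      copy.add("trans_date", row.getString("trans_date", 0));
      copy.add("currency", row.getString("currency", 0));
      copy.add("rate", row.getDouble("rate", 0));
      writer.write(copy);
    }
    reader.close();
    writer.close();
  }
}

If the problem really is the column order, both files would then match the table schema and the positional mismatch should disappear. But I would prefer a way to make HCatLoader handle this directly.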
Please help me find out:
1. Does Pig support reading from a Hive table created over multiple Parquet files with different schemas?
2. If yes, how can this be done?