AWS Data Pipeline: Tez fails on simple HiveActivity
I'm trying to run a simple AWS Data Pipeline for a POC. The case I have is the following: take data from a CSV stored on S3, perform a simple Hive query on it, and put the results back to S3.

I've created a very basic pipeline definition and tried to run it on two EMR release versions, 4.2.0 and 5.3.1 - both fail, though in different places.

The pipeline definition is the following:

{
  "objects": [
    {
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "maximumRetries": "1",
      "enableDebugging": "true",
      "name": "EmrCluster",
      "keyPair": "Jeff Key Pair",
      "id": "EmrClusterId_CM5Td",
      "releaseLabel": "emr-5.3.1",
      "region": "us-west-2",
      "type": "EmrCluster",
      "terminateAfter": "1 Day"
    },
    {
      "column": [
        "policyID INT",
        "statecode STRING"
      ],
      "name": "SampleCSVOutputFormat",
      "id": "DataFormatId_9sLJ0",
      "type": "CSV"
    },
    {
      "failureAndRerunMode": "CASCADE",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "pipelineLogUri": "s3://aws-logs/datapipeline/",
      "scheduleType": "ONDEMAND",
      "name": "Default",
      "id": "Default"
    },
    {
      "directoryPath": "s3://data-pipeline-input/",
      "dataFormat": {
        "ref": "DataFormatId_KIMjx"
      },
      "name": "InputDataNode",
      "id": "DataNodeId_RyNzr",
      "type": "S3DataNode"
    },
    {
      "s3EncryptionType": "NONE",
      "directoryPath": "s3://data-pipeline-output/",
      "dataFormat": {
        "ref": "DataFormatId_9sLJ0"
      },
      "name": "OutputDataNode",
      "id": "DataNodeId_lnwhV",
      "type": "S3DataNode"
    },
    {
      "output": {
        "ref": "DataNodeId_lnwhV"
      },
      "input": {
        "ref": "DataNodeId_RyNzr"
      },
      "stage": "true",
      "maximumRetries": "2",
      "name": "HiveTest",
      "hiveScript": "INSERT OVERWRITE TABLE ${output1} select policyID, statecode from ${input1};",
      "runsOn": {
        "ref": "EmrClusterId_CM5Td"
      },
      "id": "HiveActivityId_JFqr5",
      "type": "HiveActivity"
    },
    {
      "name": "SampleCSVDataFormat",
      "column": [
        "policyID INT",
        "statecode STRING",
        "county STRING",
        "eq_site_limit FLOAT",
        "hu_site_limit FLOAT",
        "fl_site_limit FLOAT",
        "fr_site_limit FLOAT",
        "tiv_2011 FLOAT",
        "tiv_2012 FLOAT",
        "eq_site_deductible FLOAT",
        "hu_site_deductible FLOAT",
        "fl_site_deductible FLOAT",
        "fr_site_deductible FLOAT",
        "point_latitude FLOAT",
        "point_longitude FLOAT",
        "line STRING",
        "construction STRING",
        "point_granularity INT"
      ],
      "id": "DataFormatId_KIMjx",
      "type": "CSV"
    }
  ],
  "parameters": []
}

And CSV file looks like this:

policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1
448094,FL,CLAY COUNTY,1322376.3,1322376.3,1322376.3,1322376.3,1322376.3,1438163.57,0,0,0,0,30.063936,-81.707664,Residential,Masonry,3
206893,FL,CLAY COUNTY,190724.4,190724.4,190724.4,190724.4,190724.4,192476.78,0,0,0,0,30.089579,-81.700455,Residential,Wood,1

The HiveActivity is just a simple query (copied from the AWS docs):

"INSERT OVERWRITE TABLE ${output1} select policyID, statecode from ${input1};"

However, it fails when running on emr-5.3.1:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
/mnt/taskRunner/./hive-script:617:in `<main>': Error executing cmd: /usr/share/aws/emr/scripts/hive-script "--base-path" "s3://us-west-2.elasticmapreduce/libs/hive/" "--hive-versions" "latest" "--run-hive-script" "--args" "-f"

Digging deeper into the logs, I found the following exception:

2017-02-25T00:33:00,434 ERROR [316e5d21-dfd8-4663-a03c-2ea4bae7b1a0 main([])]: tez.DagUtils (:()) - Could not find the jar that was being uploaded
2017-02-25T00:33:00,434 ERROR [316e5d21-dfd8-4663-a03c-2ea4bae7b1a0 main([])]: exec.Task (:()) - Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://ip-170-41-32-05.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/31ae6d21-dfd8-4123-a03c-2ea4bae7b1a0/emr-hive-goodies.jar. Failing because I am unlikely to write too.
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1022)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:294)
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155)

When running on emr-4.2.0 I get a different crash:

Number of reduce tasks is set to 0 since there's no reduce operator
java.lang.NullPointerException
    at org.apache.hadoop.fs.Path.<init>(Path.java:105)
    at org.apache.hadoop.fs.Path.<init>(Path.java:94)
    at org.apache.hadoop.hive.ql.exec.Utilities.toTempPath(Utilities.java:1517)
    at org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3555)
    at org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3520)

Both S3 and the EMR cluster are in the same region and running under the same AWS account. I've tried a bunch of experiments with the S3DataNode and EmrCluster configurations, but it always crashes. I also couldn't find any working example of a Data Pipeline with HiveActivity, either in the documentation or on GitHub.

Can someone please help me figure it out? Thank you.

Zilvia answered 25/2, 2017 at 2:0 Comment(3)
Did it get resolved? Facing the same error. – Sarge
Hey @Sarge, I contacted AWS support and they gave me advice that helped resolve the issue. Here are the key points I received from them: 1. Don't use ZIP for input files yet. 2. There is an error in the EMR setup, so you must delete a jar file before executing the actual Hive script - just add the following line to your script before the actual query: delete jar /mnt/taskRunner/emr-hive-goodies.jar; 3. Make sure you're operating within directories, not in the root of the bucket, since Hive doesn't support overwriting the bucket root yet - just create subfolders and save the data inside them (see the sketch below). All that helped me. – Zilvia
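For point 3, that means pointing both S3DataNode objects at subfolders rather than the bucket root, roughly like this (a sketch; the csv/ subfolder name is illustrative, and the output node changes the same way, e.g. to s3://data-pipeline-output/results/):

{
  "directoryPath": "s3://data-pipeline-input/csv/",
  "dataFormat": { "ref": "DataFormatId_KIMjx" },
  "name": "InputDataNode",
  "id": "DataNodeId_RyNzr",
  "type": "S3DataNode"
}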
Running into the same issue as you, and tried the steps you suggested. Those didn't work. The data isn't zipped, nor am I writing to S3. I did, however, try delete jar /mnt/taskRunner/emr-hive-goodies.jar, but that didn't work. #47858608 – Ironwork

I was facing the same problem when updating my EMR cluster from a 4.*.* release to the 5.28.0 release. After changing the release label, I followed @andrii-gorishnii's comment and added

delete jar /mnt/taskRunner/emr-hive-goodies.jar;

to the beginning of my Hive script and it solved my problem! Thanks @andrii-gorishnii
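
Applied to the pipeline definition from the question, the hiveScript field would become roughly this (a sketch):

"hiveScript": "delete jar /mnt/taskRunner/emr-hive-goodies.jar; INSERT OVERWRITE TABLE ${output1} select policyID, statecode from ${input1};"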

Karlykarlyn answered 25/2, 2020 at 19:40 Comment(0)
