Glue Job Succeeded but no data inserted into the target bucket
I used the new AWS Glue Studio visual tool to run a very simple SQL query, with a Catalog Table as the source, a simple SparkSQL node as the transform, and CSV file(s) in an S3 bucket as the target.

Each time I run the job, it succeeds, but nothing is stored in the bucket, not even an empty CSV file.

I am not sure whether this is a SparkSQL problem or an AWS Glue problem.

Here is the automatically generated code:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue import DynamicFrame


def sparkSqlQuery(glueContext, query, mapping, transformation_ctx) -> DynamicFrame:
    for alias, frame in mapping.items():
        frame.toDF().createOrReplaceTempView(alias)
    result = spark.sql(query)
    return DynamicFrame.fromDF(result, glueContext, transformation_ctx)


args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Script generated for node Data_Catalog_0
Data_Catalog_0_node1 = glueContext.create_dynamic_frame.from_catalog(
    database="some_long_name_data_base_catalog",
    table_name="catalog_table",
    transformation_ctx="Data_Catalog_0_node1",
)

# Script generated for node ApplyMapping
SqlQuery0 = """
SELECT DISTINCT  "ID"
    FROM myDataSource
"""
ApplyMapping_node2 = sparkSqlQuery(
    glueContext,
    query=SqlQuery0,
    mapping={"myDataSource": Data_Catalog_0_node1},
    transformation_ctx="ApplyMapping_node2",
)

# Script generated for node Amazon S3
AmazonS3_node166237 = glueContext.write_dynamic_frame.from_options(
    frame=ApplyMapping_node2,
    connection_type="s3",
    format="csv",
    connection_options={
        "path": "s3://target_bucket/results/",
        "partitionKeys": [],
    },
    transformation_ctx="AmazonS3_node166237",
)

job.commit()

This is very similar to this question; I am essentially reposting it because I am unable to comment on it due to low reputation points, and although it is 4 months old, it is still unanswered.

Piperine answered 7/9, 2022 at 7:10
Glue jobs generate lots of logs. Did you inspect any of them to try to find the issue? – Commodus
@Commodus Yeah, I checked them, but I don't see any indicative lines, or I can't make sense of them. Here are some lines you might find interesting, which I see almost every time I run Glue jobs: "main WARN JNDI lookup class is not available because this JRE does not support JNDI. JNDI string lookups will not be available, continuing configuration. java.lang.ClassNotFoundException: org.apache.logging.log4" and "main INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, ..." etc. – Piperine

The problem was the double quotes around the selected field in the SQL query. Dropping them solved the issue.

In other words, I "wrongly" used this query syntax:

SELECT DISTINCT  "ID"
    FROM myDataSource

instead of this "correct" one:

SELECT DISTINCT  ID
    FROM myDataSource

The reason is that, by default, Spark SQL parses double-quoted text as a string literal, not as a column identifier; identifiers that need quoting use backticks. This is easy to miss in the Spark SQL syntax documentation.

Piperine answered 9/9, 2022 at 6:31

© 2022 - 2024 — McMap. All rights reserved.