I have created a PySpark script (a Glue job) and am trying to run it from an EC2 instance with the CLI command aws glue start-job-run --arguments (passing a list of arguments). I have tried both the shorthand syntax and the JSON syntax to pass the arguments with the above CLI command, but I keep getting the error "GlueArgumentError: argument --input_file_path is required" (input_file_path is one of the arguments I access in the PySpark script, as shown below):
spark = SparkSession.builder.getOrCreate()
args = getResolvedOptions(sys.argv, ['input_file_path', 'CONFIG_FILE_PATH', 'SELECTED_RECORD_FILE_PATH', 'REJECTED_RECORD_FILE_PATH'])
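For completeness, the top of the script with its imports looks like this (sys and awsglue.utils.getResolvedOptions are the standard imports for a Glue job; the rest of the job only depends on these four arguments):

import sys
from awsglue.utils import getResolvedOptions
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Glue injects job arguments into sys.argv as "--name value" pairs;
# getResolvedOptions raises GlueArgumentError if any requested name is missing.
args = getResolvedOptions(sys.argv, [
    'input_file_path',
    'CONFIG_FILE_PATH',
    'SELECTED_RECORD_FILE_PATH',
    'REJECTED_RECORD_FILE_PATH',
])

input_file_path = args['input_file_path']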
The CLI commands I used to run the job are as follows:
1] aws glue start-job-run --job-name dsb_clng_and_vldtn --arguments input_file_path="s3://dsb-lfnsrn-001/lndg/data/CompanyData_UK.csv"
2] aws glue start-job-run --job-name dsb_clng_and_vldtn --arguments "file://$JSON_FILES_PATH/job_arguments_list.json"
(JSON_FILES_PATH is a shell variable)
In method 2] I used the JSON syntax to pass the arguments. The JSON file content is as follows:
{
"input_file_path":"s3://dsb-lfnsrn-001/lndg/data/CompanyData_UK.csv",
"CONFIG_FILE_PATH":"s3://htcdsb-dev/wrkspc/src/dsb-lfnsrn-001-config.json",
"SELECTED_RECORD_FILE_PATH":"s3://dsb-lfnsrn-001/pckpby/processed/Valid_UK.csv",
"REJECTED_RECORD_FILE_PATH":"s3://dsb-lfnsrn-001/pckpby/processed/Invalid_UK.csv"
}
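After re-reading the start-job-run reference, I suspect the problem is that Glue passes the keys of the --arguments map into sys.argv verbatim, so each key may need a leading --. If that is right, the shorthand call in 1] would become something like:

aws glue start-job-run --job-name dsb_clng_and_vldtn --arguments='--input_file_path="s3://dsb-lfnsrn-001/lndg/data/CompanyData_UK.csv"'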
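and the JSON file for method 2] would likewise need the prefix on every key:

{
    "--input_file_path": "s3://dsb-lfnsrn-001/lndg/data/CompanyData_UK.csv",
    "--CONFIG_FILE_PATH": "s3://htcdsb-dev/wrkspc/src/dsb-lfnsrn-001-config.json",
    "--SELECTED_RECORD_FILE_PATH": "s3://dsb-lfnsrn-001/pckpby/processed/Valid_UK.csv",
    "--REJECTED_RECORD_FILE_PATH": "s3://dsb-lfnsrn-001/pckpby/processed/Invalid_UK.csv"
}

Is the leading -- on each key what getResolvedOptions expects, or am I missing something else?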
Please advise; I have been struggling to resolve this issue for several hours.