AWS Glue Crawler Not Creating Table

I have a crawler I created in AWS Glue that does not create a table in the Data Catalog after it completes successfully.

The crawler takes roughly 20 seconds to run, and the CloudWatch log shows it completed successfully:

  • Benchmark: Running Start Crawl for Crawler
  • Benchmark: Classification Complete, writing results to DB
  • Benchmark: Finished writing to Catalog
  • Benchmark: Crawler has finished running and is in ready state

I am at a loss as to why the tables in the Data Catalog are not being created, and the AWS docs are not much help for debugging.
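
For reference, a quick way to confirm programmatically that the run really created nothing, rather than the console simply not refreshing, is to check the crawler metrics. A minimal boto3 sketch; the crawler name is a placeholder:

import boto3

glue = boto3.client("glue")

# "my-crawler" is a placeholder for the crawler in question.
metrics = glue.get_crawler_metrics(CrawlerNameList=["my-crawler"])["CrawlerMetricsList"][0]
print(metrics["TablesCreated"], metrics["TablesUpdated"], metrics["TablesDeleted"])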

Fluoroscope answered 1/11, 2017 at 17:2 Comment(2)
Did you find an answer to this? – Macaroni
I am facing the same issue with the root user, which has full access to all services; I don't understand what is wrong! – Delafuente

Check the IAM role associated with the crawler. Most likely it doesn't have the correct permissions.

When you create the crawler, if you choose to have Glue create an IAM role (the default setting), the generated policy grants access only to the S3 object/path you specified at that time. If you later edit the crawler and change only the S3 path, the role associated with the crawler won't have permission to the new S3 path.
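
A quick way to verify this is to inspect which S3 paths the crawler's role is actually allowed to read. A minimal boto3 sketch; the role name is a placeholder:

import boto3

iam = boto3.client("iam")
role_name = "AWSGlueServiceRole-MyCrawler"  # placeholder: your crawler's role

# The auto-created policy is attached inline, so it shows up here.
for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
    doc = iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)["PolicyDocument"]
    stmts = doc["Statement"]
    if not isinstance(stmts, list):
        stmts = [stmts]
    for stmt in stmts:
        # Compare these resources against the crawler's current S3 path.
        print(policy_name, stmt.get("Action"), stmt.get("Resource"))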

Equate answered 10/1, 2018 at 22:21 Comment(6)
The default Glue service role includes only S3 prefixes like glue-public; I needed to change it to include the bucket I wanted to crawl. – Benzaldehyde
Any idea why this incorrect permission doesn't appear as an exception in the logs? – Correlation
This worked for me. I deleted the old role, edited the crawler, and created a new one; tables were then created in the catalog. Appreciate the tip! – Contactor
Thanks for this one. I spent 30 minutes checking logs and failed to understand what was happening. This was on point... <3 – Urushiol
Wow. Reason number 953 why AWS is the opposite of easy to use. How difficult is this to fix? – Chappelka
Your answer saved me a lot of time; we should write a song in your name. In my case, I had changed the location of the CSV that the crawler reads. I restarted the crawler with the CSV in its new location, but since the crawler still used the existing IAM role, it was a permissions issue. I created a new IAM role when I edited the crawler, and that did the job. – Malinger

I had the same issue. As advised by others, I tried to revise the existing IAM role to include the new S3 bucket as the resource, but for some reason it did not work. Then I created a completely new role from scratch, and this time it worked. Also, one big question I have for AWS: why does this access-denied error caused by a wrong attached IAM policy not show up in the CloudWatch log? That makes it difficult to debug.

Anachronistic answered 1/4, 2020 at 14:35 Comment(0)

If you have existing tables in the target database, the crawler may associate your new files with an existing table rather than create a new one.

This occurs when there are similarities in the data, or a folder structure that Glue may interpret as partitioning.

Also, on occasion I have needed to refresh the table listing of a database to get new tables to show up.
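
To check whether the crawler merged your new files into an existing table as partitions, you can list each table and its partition count. A minimal boto3 sketch; the database name is a placeholder:

import boto3

glue = boto3.client("glue")
database = "my_database"  # placeholder: the crawler's target database

# A table whose partition count jumped after the crawl likely absorbed the new files.
for table in glue.get_tables(DatabaseName=database)["TableList"]:
    partitions = glue.get_partitions(DatabaseName=database, TableName=table["Name"])["Partitions"]
    print(table["Name"], len(partitions), "partitions")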

Cannonade answered 3/5, 2018 at 0:11 Comment(0)

I had a similar IAM issue to the one Ray mentioned. But in my case, I had not added an asterisk (*) after the bucket name, which meant the crawler could not read the objects under the bucket (it never went into the subfolders), and no table was created.

Wrong:

{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ]
        }
    ],
    "Version": "2012-10-17"
}

Correct:

{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name*"
            ]
        }
    ],
    "Version": "2012-10-17"
}
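
For what it's worth, the more conventional layout scopes object actions to bucket-name/* and keeps bucket-level actions (like s3:ListBucket) on the bucket ARN itself. A hedged boto3 sketch of attaching such a policy; the role and policy names are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Conventional split: bucket-level actions on the bucket ARN,
# object-level actions on every key under it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket-name"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::bucket-name/*"],
        },
    ],
}

# Role and policy names below are placeholders.
iam.put_role_policy(
    RoleName="AWSGlueServiceRole-MyCrawler",
    PolicyName="crawler-s3-access",
    PolicyDocument=json.dumps(policy),
)
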
Mustang answered 26/3, 2021 at 8:7 Comment(0)

You can try excluding some files in the S3 bucket; the excluded files should then appear in the log. I find this helpful in debugging what the crawler is doing.
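
Exclusions can also be set programmatically: update_crawler accepts glob-style Exclusions per S3 target. A minimal boto3 sketch; the crawler name, path, and patterns are placeholders:

import boto3

glue = boto3.client("glue")

# Crawler name, S3 path, and exclusion patterns below are placeholders.
glue.update_crawler(
    Name="my-crawler",
    Targets={
        "S3Targets": [
            {
                "Path": "s3://bucket-name/data/",
                "Exclusions": ["**.tmp", "debug/**"],  # glob patterns to skip
            }
        ]
    },
)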

Obie answered 30/7, 2018 at 18:45 Comment(0)

In my case, the problem was the setting Crawler source type > Repeat crawls of S3 data stores, which I had set to Crawl new folders only, because I thought it would crawl everything on the first run and then continue to discover only new data.

After setting it to Crawl all folders, it discovered all tables.
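
The same setting can be flipped through the API via RecrawlPolicy. A minimal boto3 sketch; the crawler name is a placeholder:

import boto3

glue = boto3.client("glue")

# CRAWL_EVERYTHING corresponds to "Crawl all folders" in the console;
# CRAWL_NEW_FOLDERS_ONLY is the value that caused the issue above.
glue.update_crawler(
    Name="my-crawler",  # placeholder
    RecrawlPolicy={"RecrawlBehavior": "CRAWL_EVERYTHING"},
)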

Curch answered 4/5, 2022 at 18:41 Comment(0)

Here is my sample role policy JSON that allows Glue to access S3 and create a table.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteTags",
                "ec2:CreateTags"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ec2:*:*:security-group/*",
                "arn:aws:ec2:*:*:network-interface/*"
            ],
            "Condition": {
                "ForAllValues:StringEquals": {
                    "aws:TagKeys": "aws-glue-service-resource"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "cloudwatch:PutMetricData",
                "ec2:DeleteNetworkInterface",
                "s3:ListBucket",
                "s3:GetBucketAcl",
                "logs:PutLogEvents",
                "ec2:DescribeVpcAttribute",
                "glue:*",
                "ec2:DescribeSecurityGroups",
                "ec2:CreateNetworkInterface",
                "s3:GetObject",
                "s3:PutObject",
                "logs:CreateLogStream",
                "s3:ListAllMyBuckets",
                "ec2:DescribeNetworkInterfaces",
                "logs:AssociateKmsKey",
                "ec2:DescribeVpcEndpoints",
                "iam:ListRolePolicies",
                "s3:DeleteObject",
                "ec2:DescribeSubnets",
                "iam:GetRolePolicy",
                "s3:GetBucketLocation",
                "ec2:DescribeRouteTables"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:CreateBucket",
            "Resource": "arn:aws:s3:::aws-glue-*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "*"
        }
    ]
}
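
Note that a permissions policy alone isn't enough: the role also needs a trust relationship so the Glue service can assume it. A minimal boto3 sketch; the role name is a placeholder:

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Glue service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="my-glue-crawler-role",  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust),
)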

Sennit answered 15/4, 2019 at 12:30 Comment(0)

Encountered the same problem. I created a new crawler and a new IAM role but still used the same database, and it worked!

Benedic answered 13/9, 2022 at 22:30 Comment(1)
PS: you can also try adjusting the maximum threshold for the tables. I adjusted that too. – Benedic
