Export from Aurora Serverless into S3
I am trying to export a .csv file from AWS Aurora Serverless (MySQL engine) to AWS S3. This works perfectly fine with an instance-backed Aurora cluster, but not with the serverless one. I set up the IAM policies for S3 as described in the documentation and created an IAM user with full access to S3 and RDS. The parameter group the serverless cluster uses has the ARN of that IAM user in the LOAD DATA FROM S3 and SELECT INTO OUTFILE S3 fields.

The code I am using to export to S3:

SELECT * FROM TABLE WHERE ID = '6838' INTO OUTFILE S3 's3://bucketname/file.csv';

I've read the documentation here: https://docs.aws.amazon.com/de_de/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html

The error message: Access denied; you need (at least one of) the SELECT INTO S3 privilege(s) for this operation

I expected the Aurora Serverless cluster to export as smoothly as the instance-backed Aurora does. The instance-backed cluster uses the same IAM user that was created for the serverless one, and it works fine.

EDIT: It turns out that AWS Aurora Serverless supports neither importing from nor exporting to S3 (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html#aurora-serverless.limitations).
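One way to confirm which kind of cluster you are dealing with is to query the cluster's engine mode with the AWS CLI. A minimal sketch, assuming the CLI is installed and configured, and that my-cluster is a placeholder identifier:

```shell
# Placeholder cluster identifier -- substitute your own.
CLUSTER_ID="my-cluster"

# Serverless v1 clusters report "serverless" here and do not
# support SELECT INTO OUTFILE S3; provisioned ones report
# "provisioned". Guarded so the snippet is a no-op without the CLI.
if command -v aws >/dev/null; then
  aws rds describe-db-clusters \
    --db-cluster-identifier "$CLUSTER_ID" \
    --query 'DBClusters[0].EngineMode' \
    --output text
else
  echo "aws CLI not installed" >&2
fi
```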

Griceldagrid answered 9/10, 2019 at 14:31

Actually, you can load your data only into a provisioned Aurora cluster, not a serverless one. I had the same issue; you might consider loading your data into an Amazon RDS MySQL database instead, and letting a Lambda function handle the data injection from S3 to RDS MySQL.

Mariandi answered 16/1, 2020 at 13:12

Did you complete this step (assuming you are issuing the statement from a non-master DB account)?

The database user that issues the SELECT INTO OUTFILE S3 statement must be granted the SELECT INTO S3 privilege to issue the statement. The master user name for a DB cluster is granted the SELECT INTO S3 privilege by default. You can grant the privilege to another user by using the following statement.

GRANT SELECT INTO S3 ON *.* TO 'user'@'domain-or-ip-address'
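Applied from the shell against the cluster endpoint, granting and then verifying the privilege might look like this. A sketch, assuming the mysql client is available; app_user, 10.0.%, and CLUSTER_ENDPOINT are placeholders:

```shell
# Placeholder account and host pattern -- substitute your own.
GRANT_SQL="GRANT SELECT INTO S3 ON *.* TO 'app_user'@'10.0.%'"

# Issue the grant as the master user, then confirm it is listed.
# Guarded so the snippet is a no-op where no client or endpoint
# is configured.
if command -v mysql >/dev/null && [ -n "${CLUSTER_ENDPOINT:-}" ]; then
  mysql -h "$CLUSTER_ENDPOINT" -u master -p -e "$GRANT_SQL"
  mysql -h "$CLUSTER_ENDPOINT" -u master -p \
    -e "SHOW GRANTS FOR 'app_user'@'10.0.%'"
fi
```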
Verlie answered 9/10, 2019 at 14:53
Life saver. This is exactly what was missing in my setup. – Jahveh

If you're OK with temporarily putting the data on a local machine or an EC2 instance, you can do it in two steps:

  1. Pull the data from the database:

     mysql --defaults-file=/path/to/.my.cnf -e "SELECT * FROM TABLE WHERE ID = '6838'" | tr '\t' ',' > /local/path/to/file.csv

     If you're OK with tab-separated output, you don't need to include tr in the pipeline.

  2. Copy the file to S3:

     aws s3 cp /local/path/to/file.csv s3://bucketname/file.csv
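The two steps above can be combined into one small script. A sketch, assuming the mysql client, the AWS CLI, and a ~/.my.cnf with credentials; TABLE_NAME and bucketname are placeholders:

```shell
#!/bin/sh

export_to_s3() {
  # Dump the matching rows; mysql emits tab-separated output, so
  # tr converts tabs to commas (fields that themselves contain
  # commas or tabs would need a real CSV writer instead).
  out=/tmp/file.csv
  mysql --defaults-file="$HOME/.my.cnf" -N -e \
    "SELECT * FROM TABLE_NAME WHERE ID = '6838'" | tr '\t' ',' > "$out"

  # Upload the resulting file to the bucket.
  aws s3 cp "$out" "s3://bucketname/file.csv"
}

# Run only when the required tools and credentials file exist.
if command -v mysql >/dev/null && command -v aws >/dev/null \
   && [ -f "$HOME/.my.cnf" ]; then
  export_to_s3
fi
```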

References

Copying to S3

Using MySQL options files

Designation answered 27/1, 2020 at 18:22

© 2022 - 2024 — McMap. All rights reserved.