How to perform database schema migrations after deploying with AWS CDK?

I'm running an Aurora PostgreSQL (Serverless) cluster. After I deploy the infrastructure for the first time, and every time I re-deploy, I want to run database schema migrations (add tables, add columns).

How can I accomplish this?

Lambda is out of the question, as migrations may run for a long time.

Edit: clarified that this is about schema migrations.

Thanks!

Aney answered 25/7, 2020 at 13:18 Comment(1)
I would and will use github.com/golang-migrate/migrate and execute the migration inside my CI/CD pipeline. I haven't built it yet, so I can't give any specifics; it's next on my list after building the CI/CD in CDK with the new pipelines construct. – Depression
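For illustration only, a rough sketch of what such a post-deployment migration step could look like with CDK Pipelines: a CodeBuildStep that downloads the golang-migrate CLI and applies everything under migrations/ after the application stage deploys. MyAppStage, the repository name, the CLI version, and how DATABASE_URL reaches the build are placeholder assumptions.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodeBuildStep, CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';
// Hypothetical Stage that contains the Aurora Serverless cluster.
import { MyAppStage } from './my-app-stage';

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.gitHub('my-org/my-repo', 'main'), // placeholder repo
        commands: ['npm ci', 'npx cdk synth'],
      }),
    });

    // Run the migrations as a post-deployment action of the application stage.
    pipeline.addStage(new MyAppStage(this, 'Prod'), {
      post: [
        new CodeBuildStep('RunMigrations', {
          commands: [
            // Fetch the migrate CLI (adjust version/platform as needed) and
            // apply everything under migrations/. DATABASE_URL is assumed to
            // be made available to the build, e.g. from Secrets Manager.
            'curl -sL https://github.com/golang-migrate/migrate/releases/download/v4.17.1/migrate.linux-amd64.tar.gz | tar xz',
            './migrate -path migrations -database "$DATABASE_URL" up',
          ],
        }),
      ],
    });
  }
}
```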

If you're looking for an example of migrating a database schema in Aurora using custom resources, see the detailed example in this repository.
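The repository itself isn't reproduced here, but the general shape of the custom-resource approach looks roughly like the sketch below (an assumption-laden outline, not the repository's actual code): a Lambda-backed custom resource whose properties change on every deploy, so CloudFormation re-runs the migration handler each time. The handler asset path is hypothetical, and note the 15-minute Lambda ceiling the question is worried about.

```ts
import { CustomResource, Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as cr from 'aws-cdk-lib/custom-resources';

export class MigrationStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Lambda that connects to the cluster and applies pending migrations.
    // In practice it also needs VPC access to Aurora and the DB credentials,
    // and it must implement the custom-resource onEvent contract.
    const migrationFn = new lambda.Function(this, 'MigrationFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/migrate'), // hypothetical asset path
      timeout: Duration.minutes(15), // Lambda's hard upper limit
    });

    const provider = new cr.Provider(this, 'MigrationProvider', {
      onEventHandler: migrationFn,
    });

    // A property that changes on every synth forces an UPDATE on each deploy,
    // which re-invokes the migration Lambda.
    new CustomResource(this, 'RunMigrations', {
      serviceToken: provider.serviceToken,
      properties: { deployTimestamp: Date.now().toString() },
    });
  }
}
```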

Xavierxaviera answered 14/11, 2020 at 0:2 Comment(1)
I think using Lambda is OK, but what about timeouts? What if the migration is expected to take a long time? I've settled on a similar approach, but using a Fargate task instead. Thank you for your suggestion. – Aney
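One hedged way to realise that Fargate variant is an AwsCustomResource that calls ecs:RunTask during deployment, so a one-off migration container runs on every deploy. Everything below (image directory, networking, sizing) is illustrative, and the RunTask call is fire-and-forget: the deployment does not wait for the migration to finish.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as cr from 'aws-cdk-lib/custom-resources';

export class MigrationTaskStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // Container image that runs the migration tool and exits.
    const taskDef = new ecs.FargateTaskDefinition(this, 'MigrationTaskDef');
    taskDef.addContainer('migrate', {
      image: ecs.ContainerImage.fromAsset('migrations-image'), // hypothetical Dockerfile dir
      logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'migrations' }),
    });

    // Kick the task off on every deploy. runTask returns immediately,
    // so the stack deployment does not wait for the migration to complete.
    new cr.AwsCustomResource(this, 'RunMigrationTask', {
      onUpdate: {
        service: 'ECS',
        action: 'runTask',
        parameters: {
          cluster: cluster.clusterName,
          taskDefinition: taskDef.taskDefinitionArn,
          launchType: 'FARGATE',
          networkConfiguration: {
            awsvpcConfiguration: {
              subnets: vpc.privateSubnets.map(s => s.subnetId),
              assignPublicIp: 'DISABLED',
            },
          },
        },
        // A new physical id each deploy forces the call to run again.
        physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()),
      },
      policy: cr.AwsCustomResourcePolicy.fromStatements([
        new iam.PolicyStatement({
          actions: ['ecs:RunTask'],
          resources: [taskDef.taskDefinitionArn],
        }),
        new iam.PolicyStatement({
          actions: ['iam:PassRole'],
          resources: [taskDef.taskRole.roleArn, taskDef.obtainExecutionRole().roleArn],
        }),
      ]),
    });
  }
}
```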

If you want to home-roll your own thing, consider throwing a bunch of scripts into a folder called migrations/.

Give the scripts predictable filenames that sort alphabetically into execution order, e.g. 01_create_users_table.ts.

Then have a wrapper script (e.g. migrate.ts) that walks through these files and builds a map from filename to migration function.

Finally, the wrapper needs to keep track of which files have already been executed (so it can skip them). You can use a small utility DynamoDB table called "migrations" for this, where the wrapper records the filename of every migration it has run.

Not exactly rocket science.
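A rough sketch of the wrapper described above, assuming each file in migrations/ exports an async up() function and that a DynamoDB table named "migrations" (partition key filename) already exists:

```ts
import { readdirSync } from 'fs';
import * as path from 'path';
import { DynamoDBClient, GetItemCommand, PutItemCommand } from '@aws-sdk/client-dynamodb';

const TABLE = 'migrations';
const dir = path.join(__dirname, 'migrations');
const ddb = new DynamoDBClient({});

async function alreadyApplied(filename: string): Promise<boolean> {
  const res = await ddb.send(new GetItemCommand({
    TableName: TABLE,
    Key: { filename: { S: filename } },
  }));
  return res.Item !== undefined;
}

async function markApplied(filename: string): Promise<void> {
  await ddb.send(new PutItemCommand({
    TableName: TABLE,
    Item: { filename: { S: filename }, appliedAt: { S: new Date().toISOString() } },
  }));
}

async function main() {
  // Alphabetical order is the execution order, hence the 01_, 02_, ... prefixes.
  const files = readdirSync(dir).filter(f => f.endsWith('.ts')).sort();

  for (const file of files) {
    if (await alreadyApplied(file)) {
      console.log(`skipping ${file} (already applied)`);
      continue;
    }
    const migration = await import(path.join(dir, file)); // expects an exported up()
    console.log(`running ${file}`);
    await migration.up();
    await markApplied(file);
  }
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});
```

You could then run it with ts-node migrate.ts after every deploy, for example as a step in your CI/CD pipeline.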

Trager answered 27/3 at 21:38 Comment(0)
