How to properly manage database deployment with SSDT and Visual Studio 2012 Database Projects?

I'm in the research phase, trying to adopt 2012 Database Projects on an existing small project. I'm a C# developer, not a DBA, so I'm not particularly fluent with best practices. I've been searching Google and Stack Overflow for a few hours now, but I still don't know how to handle some key deployment scenarios properly.

1) Over the course of several development cycles, how do I manage multiple versions of my database? If I have a client on v3 of my database and I want to upgrade them to v8, how do I manage this? We currently manage hand-crafted schema and data migration scripts for every version of our product. Do we still need to do this separately or is there something in the new paradigm that supports or replaces this?

2) If the schema changes in such a way that requires data to be moved around, what is the best way to handle this? I assume some work goes in the Pre-Deployment script to preserve the data and then the Post-Deploy script puts it back in the right place. Is that the way of it or is there something better?

3) Any other advice or guidance on how best to work with these new technologies is also greatly appreciated!

UPDATE: My understanding of the problem has grown a little since I originally asked this question and while I came up with a workable solution, it wasn't quite the solution I was hoping for. Here's a rewording of my problem:

The problem I'm having is purely data related. If I have a client on version 1 of my application and I want to upgrade them to version 5 of my application, I would have no problems doing so if their database had no data. I'd simply let SSDT intelligently compare schemas and migrate the database in one shot. Unfortunately clients have data, so it's not that simple. Schema changes from version 1 of my application to version 2 to version 3 (etc.) all impact data. My current strategy for managing data requires that I maintain a script for each version upgrade (1 to 2, 2 to 3, etc.). This prevents me from going straight from version 1 of my application to version 5 because I have no data migration script to go straight there. The prospect of creating custom upgrade scripts for every client, or of managing upgrade scripts to go from every version to every greater version, is exponentially unmanageable. What I was hoping was that there was some sort of strategy SSDT enables that makes managing the data side of things easier, maybe even as easy as the schema side of things. My recent experience with SSDT has not given me any hope of such a strategy existing, but I would love to find out differently.

Cohleen answered 13/3, 2013 at 15:2 Comment(4)
You may find the Schema Compare feature useful for keeping schemas in sync: msdn.microsoft.com/en-us/library/aa833202(v=vs.80).aspx — Stadiometer
The schema compare tool is great, but it doesn't help in any way with data. Certain schema changes will require custom effort to preserve and migrate data, and the schema compare tool doesn't support that. I think that's why the Pre- and Post-Deploy scripts exist: to help with things of this nature. — Cohleen
You may want to consider snapshots of your project so you can release them in a certain order. Snapshot v1, v2, v3, etc. Apply each in order so you don't lose anything when you have a release package. You could query a table in the database to figure out the "current" version, and start with v+1 through a batch or PowerShell script. The Post-Deploy script could handle setting that version in your table. — Tertullian
Wish I'd found this question earlier! Data is the forgotten story as far as I can tell with SSDT, which is strange as that is the whole point of a database. Sometimes it feels like they went for the stuff that looks flashy in demos :( — Grisly

I've been working on this myself, and I can tell you it's not easy.

First, to address the reply by JT - you cannot dismiss "versions", even with the declarative updating mechanics SSDT offers. SSDT does a "pretty decent" job (provided you know all the switches and gotchas) of moving any source schema to any target schema, and it's true that this doesn't require versioning per se, but it has no idea how to manage "data motion" (at least not that I can see!). So, just like DBProj, you're left to your own devices in Pre/Post scripts. Because the data motion scripts depend on a known start and end schema state, you cannot avoid versioning the DB. The "data motion" scripts, therefore, must be applied to a versioned snapshot of the schema, which means you cannot arbitrarily update a DB from v1 to v8 and expect the data motion scripts for v2 through v8 to work (presumably, you wouldn't need a v1 data motion script).

Sadly, I can't see any mechanism in SSDT publishing that allows me to handle this scenario in an integrated way. That means you'll have to add your own scaffolding.

The first trick is to track versions within the database (and SSDT project). I started using a trick in DBProj, and brought it over to SSDT, and after doing some research, it turns out that others are using this too. You can apply a DB Extended Property to the database itself (call it "BuildVersion" or "AppVersion" or something like that), and apply the version value to it. You can then capture this extended property in the SSDT project itself, and SSDT will add it as a script (you can then check the publish option that includes extended properties). I then use SQLCMD variables to identify the source and target versions being applied in the current pass. Once you identify the delta of versions between the source (project snapshot) and target (target db about to be updated), you can find all the snapshots that need to be applied. Sadly, this is tricky to do from inside the SSDT deployment, and you'll probably have to move it to the build or deployment pipeline (we use TFS automated deployments and have custom actions to do this).
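As an illustrative sketch of that extended-property trick (the property name "AppVersion" and the version values here are my own assumptions, not anything SSDT prescribes):

```sql
-- Hypothetical version tracking via a database-level extended property.
-- class = 0 in sys.extended_properties means the property belongs to
-- the database itself rather than to a specific object.

-- Record the version (SSDT can capture this in the project and emit it
-- when the publish option to include extended properties is enabled):
IF EXISTS (SELECT 1 FROM sys.extended_properties
           WHERE class = 0 AND name = N'AppVersion')
    EXEC sys.sp_updateextendedproperty @name = N'AppVersion', @value = N'5';
ELSE
    EXEC sys.sp_addextendedproperty @name = N'AppVersion', @value = N'5';

-- Read the target database's current version so the build/deployment
-- pipeline can work out which snapshots still need to be applied:
SELECT CAST(value AS nvarchar(16)) AS CurrentVersion
FROM   sys.extended_properties
WHERE  class = 0 AND name = N'AppVersion';
```

The comparison of source and target versions then happens outside the publish itself, e.g. in a custom build or deployment step.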

The next hurdle is to keep snapshots of the schema with their associated data motion scripts. In this case, it helps to make the scripts as idempotent as possible (meaning you can rerun them without any ill side-effects). It helps to split scripts that can safely be rerun from scripts that must be executed one time only. We're doing the same thing with static reference data (dictionary or lookup tables) - in other words, we have a library of MERGE scripts (one per table) that keep the reference data in sync, and these scripts are included in the post-deployment scripts (via the SQLCMD :r command). The important thing to note here is that you must execute them in the correct order in case any of these reference tables have FK references to each other. We include them in the main post-deploy script in order, and it helps that we created a tool that generates these scripts for us - it also resolves dependency order. We run this generation tool at the close of a "version" to capture the current state of the static reference data. All your other data motion scripts are basically going to be special-case and most likely single-use only. In that case, you can do one of two things: you can use an IF statement against the db build/app version, or you can wipe out the one-time scripts after creating each snapshot package.
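A minimal sketch of one such idempotent MERGE script, using a hypothetical lookup table (the table and column names are mine, not from any real project):

```sql
-- Reference-data sync for a hypothetical dbo.OrderStatus lookup table.
-- Safe to rerun on every deployment: rows are inserted, corrected, or
-- removed so the table always matches the declared values.
MERGE INTO dbo.OrderStatus AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Closed')
) AS source (StatusId, StatusName)
   ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

Each script like this is then pulled into the main post-deployment script in dependency order via SQLCMD includes, e.g. `:r .\ReferenceData\OrderStatus.data.sql`.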

It helps to remember that SSDT will disable FK check constraints and only re-enable them after the post-deployment scripts run. This gives you a chance to populate new non-null fields, for example (by the way, you have to enable the option to generate temporary "smart" defaults for non-null columns to make this work). However, FK check constraints are only disabled for tables that SSDT is recreating because of a schema change. For other cases, you are responsible for ensuring that data motion scripts run in the proper order to avoid constraint complaints (or you have to manually disable/re-enable the constraints in your scripts).
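For example, a post-deployment fixup for a newly added NOT NULL column might look like this (the table, columns, and the generated default value are hypothetical):

```sql
-- Replace the temporary "smart default" that SSDT generated for a new
-- NOT NULL column with real values derived from existing data.
UPDATE c
SET    c.DisplayName = c.FirstName + N' ' + c.LastName
FROM   dbo.Customer AS c
WHERE  c.DisplayName = N'';   -- rows still carrying the generated default
```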

DACPAC can help you because DACPAC is essentially a snapshot. It will contain several XML files describing the schema (similar to the build output of the project), but frozen in time at the moment you create it. You can then use SQLPACKAGE.EXE or the deploy provider to publish that package snapshot. I haven't quite figured out how to use the DACPAC versioning, because it's more tied to "registered" data apps, so we're stuck with our own versioning scheme, but we do put our own version info into the DACPAC filename.

I wish I had a more conclusive and exhaustive example to provide, but we're still working out the issues here too.

One thing that really sucks about SSDT is that, unlike DBProj, it's currently not extensible. Although it does a much better job than DBProj at a lot of different things, you can't override its default behavior unless you can find some way of working around a problem from inside the pre/post scripts. One of the issues we're trying to resolve right now is that the default method of recreating a table for updates (CCDR) really stinks when you have tens of millions of records.

UPDATE: I haven't seen this post in some time, but apparently it's been active lately, so I thought I'd add a couple of important notes: if you are using VS2012, the June 2013 release of SSDT now has a Data Comparison tool built in, and it also provides extensibility points - that is to say, you can now include Build Contributors and Deployment Plan Modifiers in the project.

Sabra answered 30/4, 2013 at 23:37 Comment(2)
Could you explain how to use the new Deployment Plan Modifiers feature? — Persinger
The project has evolved and you can extend the deployment by implementing your own DeploymentContributors (see the.agilesql.club/Blogs/Ed-Elliott/…) — Allantois

I haven't really found any more useful information on the subject but I've spent some time getting to know the tools, tinkering and playing, and I think I've come up with some acceptable answers to my question. These aren't necessarily the best answers. I still don't know if there are other mechanisms or best practices to better support these scenarios, but here's what I've come up with:

The Pre- and Post-Deploy scripts for a given version of the database are only used to migrate data from the previous version. At the start of every development cycle, the scripts are cleaned out, and as development proceeds they are fleshed out with whatever SQL is needed to safely migrate data from the previous version to the new one. The one exception here is static data. This data is known at design time and maintains a permanent presence in the Post-Deploy scripts in the form of T-SQL MERGE statements, which helps make it possible to deploy any version of the database to a new environment with just the latest publish script.

At the end of every development cycle, a publish script is generated from the previous version to the new one. This script includes the generated SQL to migrate the schema along with the hand-crafted deploy scripts. Yes, I know the Publish tool can be used directly against a database, but that's not a good option for our clients. I am also aware of dacpac files, but I'm not really sure how to use them. The generated publish script seems to be the best option I know of for production upgrades.
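To illustrate the shape of the Post-Deploy script this approach produces (the file names are made up for the example):

```sql
-- Script.PostDeployment.sql (illustrative layout)

-- Static data: idempotent MERGE scripts, included in FK-dependency order.
-- These stay in the project permanently.
:r .\StaticData\Country.data.sql
:r .\StaticData\Currency.data.sql

-- Data motion from the previous version to this one. This section is
-- emptied at the start of each development cycle and rebuilt as needed.
:r .\Migration\MoveCustomerAddresses.sql
```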

So to answer my scenarios:

1) To upgrade a database from v3 to v8, I would have to execute the generated publish script for v4, then for v5, then for v6, etc. This is very similar to how we do it now. It's well understood and Database Projects seem to make creating/maintaining these scripts much easier.

2) When the schema changes from underneath data, the Pre- and Post-Deploy scripts are used to migrate the data to where it needs to go for the new version. Affected data is essentially backed-up in the Pre-Deploy script and put back into place in the Post-Deploy script.

3) I'm still looking for advice on how best to work with these tools in these scenarios and others. If I got anything wrong here, or if there are any other gotchas I should be aware of, please let me know! Thanks!
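For scenario 1, the chain of generated publish scripts could be kept honest with a small guard at the top of each one, assuming a version marker such as the extended property discussed elsewhere in this thread (the property name and version numbers are illustrative):

```sql
-- Guard for the v3-to-v4 publish script: refuse to run against any
-- database that isn't currently at v3.
DECLARE @v nvarchar(16) =
    (SELECT CAST(value AS nvarchar(16))
     FROM sys.extended_properties
     WHERE class = 0 AND name = N'AppVersion');

IF @v IS NULL OR @v <> N'3'
BEGIN
    RAISERROR (N'This script upgrades v3 to v4; the database reports version %s.',
               16, 1, @v);
    SET NOEXEC ON;   -- suppress execution of the remaining batches
END
```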
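For scenario 2, the back-up-and-restore pattern might be sketched like this, using a permanent staging table so the data survives between the pre- and post-deployment phases (all object names are hypothetical):

```sql
-- Pre-Deployment: stash the affected data before SSDT alters the schema.
IF OBJECT_ID(N'dbo.__PhoneMigration') IS NULL
    SELECT CustomerId, Phone
    INTO   dbo.__PhoneMigration
    FROM   dbo.Customer;
GO

-- Post-Deployment: move the data into its new home, then clean up.
UPDATE cc
SET    cc.Phone = m.Phone
FROM   dbo.CustomerContact AS cc
JOIN   dbo.__PhoneMigration AS m ON m.CustomerId = cc.CustomerId;

DROP TABLE dbo.__PhoneMigration;
```

The OBJECT_ID check makes the pre-deployment half safe to rerun if a publish fails partway through.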

Cohleen answered 14/3, 2013 at 19:28 Comment(0)

In my experience of using SSDT the notion of version numbers (i.e. v1, v2...vX etc...) for databases kinda goes away. This is because SSDT offers a development paradigm known as declarative database development which loosely means that you tell SSDT what state you want your schema to be in and then let SSDT take responsibility for getting it into that state by comparing against what you already have. In this paradigm the notion of deploying v4 then v5 etc.... goes away.

Your pre and post deployment scripts, as you correctly state, exist for the purposes of managing data.

Hope that helps.

JT

Playroom answered 3/4, 2013 at 8:23 Comment(6)
Application code is versioned and it expects a specific database schema. Therefore, the schema is essentially versioned. One could use SSDT as you describe to migrate the schema directly to any form and drop application binaries to match, but how do you migrate data in this scenario? Nothing knows how to automatically migrate data, so you have to tell it how. Pre/Post deployment scripts must therefore be hand written. The only manageable strategy I've found is as I described in my answer, but it completely prevents direct upgrades that skip intervening versions. Unless I'm missing something... — Cohleen
Hi @darkmyst, I always write pre/post deployment scripts so that they work in the same idempotent manner as the declared schema. Now, this is admittedly not an easy thing to do, but I haven't (yet) come across a scenario that requires one to deploy interim versions. Yes, the scripts need to be cognizant of all the schema changes in those interim versions, and that's why they are difficult to write - however, I do feel that requiring deployment of the interim versions negates the declarative paradigm of SSDT, which to my mind is the whole point of using it. Just my two-penneth. — Playroom
It seems like the more releases you have, the more of a burden accounting for the interim release changes becomes. Tracking all those interim changes and testing the upgrade path for every single version seems exponentially tedious and risky. Executing a lengthy series of distinct, well-tested upgrade scripts in an automated fashion seems like a worthwhile trade-off to avoid that risk. Still, I'm looking into Declarative Database Development and would like to see an example of how a deployment script such as yours is developed. Thanks for your answers! — Cohleen
I certainly agree that testing your upgrades is a worthwhile endeavour, and that doesn't go away just because you happen to use the declarative paradigm - that is something that I preach to the teams I work with. In fact, I'd argue that testing upgrades is more important because you're relying on a tool to do a human's job. Don't want to blow my own trumpet too much, but I wrote an article here: devproconnections.com/article/sql-server-2012/… that is an attempt to explain a lot of this stuff. — Playroom
@Playroom - to add to your suggestion, I have found that continuous integration, by automating a build/deployment to a database (or databases), can go a long way toward finding deployment errors (not found by building specific post-deployment, uncompiled scripts). Also, when modifying an object, a developer can right-click and find all references to view any post-deployment scripts that may be affected. — Knockwurst
Is there an example of this pre/post deploy script that works in an idempotent manner? — Etalon

I just wanted to say that this thread so far has been excellent.

I have been wrestling with the exact same concerns and am attempting to tackle this problem in our organization, on a fairly large legacy application. We've begun the process of moving toward SSDT (on a TFS branch) but are at the point where we really need to understand the deployment process, and managing custom migrations, and reference/lookup data, along the way.

To complicate things further, our application is one code-base but can be customized per 'customer', so we have about 190 databases we are dealing with for this one project, not just the 3 or so that is probably normal. We do deployments all the time and even set up new customers fairly often. We rely heavily on PowerShell now with old-school incremental release scripts (and associated scripts to create a new customer at that version). I plan to contribute once we figure this all out, but please share whatever else you've learned. I do believe we will end up maintaining custom release scripts per version, but we'll see. The idea of maintaining each script within the project, and including From and To SqlCmd variables, is very interesting. If we did that, we would probably prune along the way, physically deleting the really old upgrade scripts once everybody was past that version.

BTW - Side note - On the topic of minimizing waste, we also just spent a bunch of time figuring out how to automate the enforcement of proper naming/data type conventions for columns, as well as automatic generation for all primary and foreign keys, based on naming conventions, as well as index and check constraints etc. The hardest part was dealing with the 'deviants' that didn't follow the rules. Maybe I'll share that too one day if anyone is interested, but for now, I need to pursue this deployment, migration, and reference data story heavily. Thanks again. It's like you guys were speaking exactly what was in my head and looking for this morning.

Vasques answered 17/5, 2013 at 12:53 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.