In SQL Server Data Tools you have the deployment option "Block incremental deployment if data loss might occur", which I'd wager is a best practice to keep checked.
Let's say we have a table foo and a column bar which is now redundant: it has no dependencies, foreign keys and so on, and we have already removed all references to it in our data layer and stored procedures, as it's simply not used. In other words, we are satisfied that dropping this column will have no adverse effects.
There are a couple of flies in the ointment:
- The column has data in it
- The database is published to hundreds of distributed clients, and it could take months for the change to ripple out to all of them
As the column is populated, publishing will fail unless we uncheck "Block incremental deployment if data loss might occur" (the BlockOnPossibleDataLoss publish property). However, this option applies at the database level, not the table level, and because of the distributed nature of the clients we'd have to leave it off for months until all databases were updated, then turn it back on once every client had taken the change (our databases have version numbers set by our build, so we can tell when that point is reached).
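As a rough illustration of how we'd tell whether a given client still needs the option left off, here is a minimal check using only the table and column names above:

-- Returns 1 while the redundant column still exists on this client,
-- i.e. while deployments to it would still trip the data-loss block
SELECT CASE WHEN EXISTS (
    SELECT 1
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'foo' AND COLUMN_NAME = 'bar'
) THEN 1 ELSE 0 END AS bar_still_present;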
You may think we could solve this with a pre-deployment script such as:
if exists (select * from information_schema.columns where table_name = 'foo' and column_name = 'bar')
BEGIN
    -- drop the default constraint first (if it still exists), then the column
    if OBJECT_ID('DF_foo_bar', 'D') is not null
        alter table foo drop constraint DF_foo_bar;
    alter table foo drop column bar;
END
But again, this fails unless we turn the "Block incremental deployment if data loss might occur" option off.
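If we did end up running with the option off for a window, one mitigation we've considered is archiving the column's contents before the drop, so nothing is silently lost in the meantime. A sketch only: the archive table name, and the assumption that foo has an id key, are illustrative rather than part of our real schema:

if exists (select * from information_schema.columns where table_name = 'foo' and column_name = 'bar')
BEGIN
    -- keep a copy of the doomed column while the data-loss block is disabled
    -- (id is an assumed key column; dbo.foo_bar_archive is a made-up name)
    if OBJECT_ID('dbo.foo_bar_archive', 'U') is null
        select id, bar into dbo.foo_bar_archive from foo;

    if OBJECT_ID('DF_foo_bar', 'D') is not null
        alter table foo drop constraint DF_foo_bar;
    alter table foo drop column bar;
END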
I'm simply interested in what others have done in this scenario, as I'd like a per-table granularity that doesn't currently seem possible.