How to protect against deletion of a blob container?

It's easy to both create and delete blob data. There are ways to protect against accidental data loss, e.g.:

  • Resource locks to protect against accidental storage account deletion
  • Azure RBAC to limit access to account/keys.
  • Soft delete to recover from accidental blob deletion (sketched below).
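
For reference, blob-level soft delete is switched on through the blob service properties. A minimal sketch with the Python SDK (azure-storage-blob v12); the connection string and the retention window are placeholders:

```python
from azure.storage.blob import BlobServiceClient, RetentionPolicy

# Placeholder connection string; use the storage account's own.
service = BlobServiceClient.from_connection_string("<connection-string>")

# Keep deleted blobs recoverable for 14 days (1-365 allowed).
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)
)
```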

This is already a decent package, but it feels like there's a weak link. AFAIK, blob containers lack the kind of safety net that exists for the account and for individual blobs.

Considering that containers are a convenient unit for blob enumeration and batch deletion, that's bad.

How can I protect against accidental/malicious container deletion and mitigate the risk of data loss?

What I've considered:

Idea 1: Keep a synced copy of all data in another storage account. But this brings synchronization complexity (incremental copy?) and a notable cost increase.

Idea 2: Lock up the account keys and force everyone to work with carefully scoped SAS tokens. But that's a lot of hassle with dozens of SAS tokens and their renewals, and sometimes container deletion actually is required and authorized. It feels complex enough to break; I'd prefer a safety net anyway.

Idea 3: Undo the deletion somehow? According to the Delete Container documentation, the container data is not gone immediately:

The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

However, there is no information on when/how storage account garbage collection runs, or whether/how/for how long the container data could be recovered.

Any better options I've missed?

Santosantonica answered 3/9, 2018 at 15:59

UPDATE:

Container soft delete is now available. It is similar to blob-level soft delete and allows recovery from accidental container deletion. The original answer below is still relevant as additional measures to take.
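
For completeness, container soft delete is configured on the management plane. A sketch with the Python azure-mgmt-storage SDK; the subscription id, resource group and account name are placeholders, and model names may vary slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    BlobServiceProperties, DeleteRetentionPolicy,
)

# Placeholders: subscription id, resource group and account name.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Keep deleted *containers* recoverable for 14 days.
client.blob_services.set_service_properties(
    "<resource-group>",
    "<account-name>",
    BlobServiceProperties(
        container_delete_retention_policy=DeleteRetentionPolicy(
            enabled=True, days=14
        )
    ),
)
```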


There is no single silver bullet. A recap of what can be done:

Prevention measures

  • DO apply storage account level protections (resource locks).
  • DO limit account/container delete access to only callers who actually need it.
  • DO take an infinite lease on containers.

Use Managed Service Identity with RBAC when possible, or delegate access with limited permissions using SAS (and stored access policies). This reduces the set of actors and scenarios in which accidental/malicious deletion could happen in the first place.

Leases do not prevent malicious deletion, but they declare the "do not delete" intent more clearly, and the required extra step of breaking the lease acts as an additional "Are you sure?" layer.
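
A sketch of taking such a lease with the Python azure-storage-blob SDK (v12); the connection string and container name are placeholders:

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name.
container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="important-data"
)

# lease_duration=-1 requests an infinite lease. While it is active,
# Delete Container fails with HTTP 412 unless the lease id is presented.
lease = container.acquire_lease(lease_duration=-1)
print("Lease id:", lease.id)
```

While the lease is held, a Delete Container call that does not present the lease id is rejected, which is exactly the extra confirmation step described above.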

Recovery measures

AFAIK, no built-in recovery tool exists once an entire container has been deleted.

  • DO implement a periodic backup solution for disaster recovery.
  • CONSIDER contacting Azure support immediately if you have no backup of your own.

As with all backup solutions, back up to locations in a different security context and/or offline, so that the same incident cannot take out the backups as well. One blob container backup approach is sketched below.
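
For instance, a container can be copied blob-by-blob into a second storage account using server-side copy. A minimal sketch with the Python azure-storage-blob SDK (v12); both connection strings and the container names are placeholders, and a production job would add incremental logic and copy-status polling:

```python
from azure.storage.blob import ContainerClient

# Placeholder connection strings and container names.
src = ContainerClient.from_connection_string(
    "<source-conn-string>", container_name="important-data"
)
dst = ContainerClient.from_connection_string(
    "<backup-conn-string>", container_name="important-data-backup"
)

for item in src.list_blobs():
    # Server-side copy; for a private source account, append a read SAS
    # to the source URL so the destination service can fetch it.
    src_url = src.get_blob_client(item.name).url
    dst.get_blob_client(item.name).start_copy_from_url(src_url)
```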

If you have no backup to restore from, the container may still be recoverable by Microsoft (if you are lucky and fast enough). According to the Delete Container documentation, the container data is not gone immediately:

The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

Santosantonica answered 8/1, 2020 at 8:18

There is an alternative option you should consider: the access policies offered for containers. You can use SAS for access and add an additional layer using stored access policies, which operate at the container level. There you can grant access that does not include the delete permission:
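
A sketch of such a no-delete stored access policy, and a SAS issued against it, with the Python azure-storage-blob SDK (v12); the account details are placeholders:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccessPolicy, ContainerClient, ContainerSasPermissions,
    generate_container_sas,
)

# Placeholder account details.
container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="important-data"
)

# Stored access policy: read/list/add/create/write, but no delete.
policy = AccessPolicy(
    permission=ContainerSasPermissions(
        read=True, list=True, add=True, create=True, write=True
    ),
    expiry=datetime.now(timezone.utc) + timedelta(days=30),
)
container.set_container_access_policy(signed_identifiers={"no-delete": policy})

# A SAS issued against the policy inherits its permissions and expiry,
# and is revocable by removing or renaming the policy.
sas = generate_container_sas(
    account_name="<account-name>",
    container_name="important-data",
    account_key="<account-key>",
    policy_id="no-delete",
)
```

Because the SAS references the policy by id, revoking or tightening the policy immediately affects every token issued against it.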

[Screenshot: container-level access policy permissions] This is more for the preventive side.

RBAC would also be a good way to secure access to containers.

When it comes to recovering from data loss, these are the official suggestions:

Block blobs. Create a point-in-time snapshot of each block blob. For more information, see Creating a Snapshot of a Blob. For each snapshot, you are only charged for the storage required to store the differences within the blob since the last snapshot state. The snapshots are dependent on the existence of the original blob they are based on, so a copy operation to another blob or even another storage account is advisable. This ensures that backup data is properly protected against accidental deletion. You can use AzCopy or Azure PowerShell to copy the blobs to another storage account.

Files. Use share snapshots, or use AzCopy or PowerShell to copy your files to another storage account.

Tables. Use AzCopy to export the table data into another storage account in another region. More details can be found in the official documentation.
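
To make the block-blob suggestion concrete, here is a minimal snapshot pass over a container with the Python azure-storage-blob SDK (v12); the connection string and container name are placeholders. Note the caveat raised in the comments below: snapshots live in the same container, so they do not protect against deletion of the container itself.

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name.
container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="important-data"
)

for item in container.list_blobs():
    # Each snapshot is a read-only, point-in-time copy of the blob,
    # billed only for the delta against the base blob.
    snap = container.get_blob_client(item.name).create_snapshot()
    print(item.name, "->", snap["snapshot"])
```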

Mojgan answered 4/9, 2018 at 18:00 Comment(2)
So, basically idea 2 from the OP, RBAC + SAS? A stored access policy only provides a SAS permission template and revocation; it does not prevent accidental container deletion nor help you recover, right? AFAIK snapshots are stored in-container and do not protect against *container* deletion in any way. – Insinuating
Yep, it would prevent the deletion if customers don't have the right to delete the container, mainly by granting them read, add, create, write and list access. I'd recommend doing some testing around it, since an access policy is not identical to SAS but provides additional functionality (at the container level). For recovery, your best bet is having snapshots: if a blob with a snapshot is being deleted, the service throws an error requiring the snapshots to be deleted first. I'd recommend posting to feedback.azure.com/forums/217298-storage. I will also discuss it with my internal team. – Mojgan
