Removing a disk from a ZFS pool permanently

I tried to add an SSD to my zpool as a ZIL (log device), but I made a mistake.

What I meant to run: zpool add zones log c0t1d0
What I actually ran: zpool add zones c0t1d0

I tried the zpool remove, zpool detach, and zpool offline commands, but they all failed.
How can I remove the SSD from the pool without losing data?

$ zpool status
pool: zones
state: ONLINE
scan: none requested
config:
   NAME          STATE     READ WRITE CKSUM
   zones         ONLINE       0     0     0
     c0t0d0      ONLINE       0     0     0
     c0t1d0      ONLINE       0     0     0


$ zpool iostat -v
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zones        280G   109G     40    139  1.28M  13.7M
  c0t0d0     263G  15.3G     39     35  1.25M  2.61M
  c0t1d0    17.6G  93.4G      0    104  20.9K  11.1M
----------  -----  -----  -----  -----  -----  -----
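
For comparison, this is roughly what zpool status would show if the SSD had been added as a log vdev instead (illustrative output, not from this system):

   NAME          STATE     READ WRITE CKSUM
   zones         ONLINE       0     0     0
     c0t0d0      ONLINE       0     0     0
   logs
     c0t1d0      ONLINE       0     0     0
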
Therontheropod answered 21/11, 2016 at 7:2 Comment(2)
Just an extra suggestion from the ignorant! Dan, wouldn't it be a fairly simple addition to run another utility after the drive was removed from the pool, one that copies the files (again) to a new place in the pool and erases the indirect mapping at the same time? Yes, it copies twice, but it would be the logical equivalent of copying data out of the drive, removing the drive, and then copying the data back into the pool, just done automatically rather than manually, and a practical solution for heavily loaded drives as well. Only a thought!Vitriolic
@Vitriolic I think you meant to post this under my answer below :). Unfortunately it's not that simple, because ZFS would also have to walk the entire pool metadata tree and rewrite every place that pointed to the old data (in snapshots, the dedup table, etc.). With the indirect mappings, ZFS sees that the device listed in a given block pointer is missing and consults the mapping instead, which is much easier to implement.Fenestrated

Unfortunately, you will have to destroy the pool and create it anew. You can use zfs send/recv to migrate all the data while preserving snapshots, metadata, etc., and it also copies faster than a normal cp.
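
A rough sketch of that workflow, assuming a second pool (here called "backup") with enough free space to hold a copy; the pool and snapshot names are illustrative and the exact steps depend on your dataset layout:

$ zfs snapshot -r zones@migrate                    # recursive snapshot of the whole pool
$ zfs send -R zones@migrate | zfs recv -F backup   # replicate datasets, snapshots, properties
$ zpool destroy zones                              # only after verifying the copy!
$ zpool create zones c0t0d0 log c0t1d0             # recreate with the SSD as a log vdev this time
$ zfs send -R backup@migrate | zfs recv -F zones   # copy everything back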

Oratorio answered 22/11, 2016 at 9:41 Comment(0)

UPDATE: the ZoL 0.8.0 release will contain this feature.

The other answer is correct that today this is not possible except through a send/recv storage migration.

However, there is a new feature developed by Matt Ahrens (head of the OpenZFS community) that is close to landing on the master branch and will eventually make its way to other platforms (FreeBSD / Linux / macOS / etc.). Here is a link to the pull request on Github.

Once it integrates, you will be able to run zpool remove on any top-level vdev, which will migrate its storage to a different device in the pool and add indirect mappings from the old location to the new one. It's not great if the vdev you're removing is already very full of data (because then accesses to any of that data have to go through the indirect mappings), but it is designed to work great for the use case you're talking about (misconfiguration that you noticed very quickly).
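
Once that lands, the fix should look roughly like this (command form based on the pull request; the final syntax may differ slightly):

$ zpool remove zones c0t1d0      # evacuate the accidentally added top-level vdev
$ zpool status zones             # watch until the removal/evacuation completes
$ zpool add zones log c0t1d0     # then re-add the SSD as the intended log device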

Fenestrated answered 18/4, 2017 at 5:22 Comment(1)
And it looks like it's about to land in Linux in the 0.8.0 release (github.com/zfsonlinux/zfs/pull/6900).Mightily
