How to destroy a ZFS pool while the pool is busy?

I have a ZFS pool:

$ sudo zpool status lxd
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME                    STATE     READ WRITE CKSUM
    lxd                     ONLINE       0     0     0
      /var/lib/lxd/zfs.img  ONLINE       0     0     0

I've tried:

$ sudo zpool destroy -f lxd 
cannot destroy 'lxd': pool is busy

How can I unmount the zpool image file and destroy the pool?

Cadel answered 16/11, 2016 at 7:28 Comment(1)
Did you try umount --lazy or --force? – Philemol

I would try these things in this order:

  • Stop all read/write IO from programs on the pool and its file systems (check with zpool iostat for current read/write)
  • Stop all sharing services (SMB, NFS, AFP, iSCSI) that use this pool or remove the sharing properties on the file systems
  • Unmount (zfs unmount) all file systems of the pool
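A sketch of those three steps on the command line, using the lxd pool from the question (the share properties only matter if you actually enabled them):

```shell
# 1. Watch for remaining read/write activity on the pool (Ctrl-C to stop).
zpool iostat lxd 2

# 2. Disable sharing on the pool's file systems, if any were shared.
zfs set sharenfs=off lxd
zfs set sharesmb=off lxd

# 3. Unmount the pool's file systems, then destroy the pool.
zfs unmount lxd        # or 'zfs unmount -a' to unmount all ZFS file systems
zpool destroy lxd
```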
Unamerican answered 16/11, 2016 at 8:12 Comment(2)
Yep, what helped me was removing the .img file with sudo, unmounting the file system, and killing all ZFS processes, and then I had to reboot the machine. – Cadel
I had a similar problem with the pool being "busy" when I tried to export it. However, after I quit all tmux sessions of all users on the server, the export succeeded. Not always feasible, but at least one more thing to try. – Mozzetta

If your present working directory (pwd) is within the pool's mounted directory structure, this can cause the pool to be busy.

Try changing your current directory, for example to your home directory: cd ~
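This check can also be scripted. A minimal sketch, where is_under is a hypothetical helper (not from the original post) and /var/lib/lxd stands in for the pool's mountpoint:

```shell
#!/bin/sh
# Hypothetical helper: succeed if directory $1 is the mountpoint $2
# or lies anywhere beneath it.
is_under() {
    case "$1" in
        "$2"|"$2"/*) return 0 ;;
        *)           return 1 ;;
    esac
}

# Warn if the current shell would keep the pool busy.
if is_under "$PWD" /var/lib/lxd; then
    echo "your shell is inside the pool -- run: cd ~"
fi
```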

Mesdemoiselles answered 31/10, 2021 at 22:26 Comment(2)
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. – Enfeeble
This answer was quite clear for me and made me do a facepalm as I realized I was trying to delete the pool while I was inside one of its directories. D'oh! Changed to the root directory and ran the command again, and it worked perfectly. – Tetrachloride

There are quite a few things in Linux which can keep a ZFS pool busy, blocking export and destroy commands. As root, check these (assuming POOLNAME=<yourpool>):

  1. Filesystem references like open files, current directories etc.:
# lsof 2>/dev/null | grep $POOLNAME
  2. Mounted pool filesystems and zvols:
# mount | grep $POOLNAME
  3. Active swap devices (any /dev/zd...?):
# swapon -s
  4. Device-mapper devices referencing /dev/zvol/<poolname>/.../<volume> block devices. These might have been set up by LVM, although LVM shouldn't pick up /dev/zd.* devices, which are usually excluded in /etc/lvm/lvm.conf via global_filter=....
# dmsetup deps

Remove these references: stop or kill the processes by their PID, umount the zvol file systems, swapoff the zvol or file swap devices, vgexport the involved PVs, and dmsetup remove the DM devices.
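A cleanup pass over those four sources might be sketched like this (every command here is disruptive, so review each PID and device before running any of it):

```shell
POOLNAME=lxd   # assumption: substitute your own pool name

# 1. Kill processes holding open files or working directories on the pool.
lsof 2>/dev/null | grep "$POOLNAME" | awk '{print $2}' | sort -u | xargs -r kill

# 2. Unmount the pool's file systems and mounted zvols.
mount | grep "$POOLNAME" | awk '{print $3}' | xargs -r -n1 umount

# 3. Turn off swap on zvol-backed devices.
swapon -s | awk '$1 ~ /^\/dev\/zd/ {print $1}' | xargs -r -n1 swapoff

# 4. Remove device-mapper devices found via 'dmsetup deps'
#    (substitute the names reported there):
# dmsetup remove DMNAME
```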

At least for (3.) you need the /dev/zd... device paths belonging to your zpool, so build a regex for grep:

POOLDEVS=$(
    # resolve each zvol symlink to its real /dev/zd* node and join the
    # device paths with '|'; tr is needed because realpath -z emits
    # NUL-terminated output, which command substitution would drop
    find "/dev/zvol/$POOLNAME" -type l -print0 |
        xargs -0 realpath -zP | tr '\0' '\n' | paste -sd '|'
    # add the pool name itself as a second pattern line
    echo "$POOLNAME"
)

$POOLDEVS is a regex like /dev/zd6016|/dev/zd4720p2|..., so try:

(mount; swapon; pvs) | grep -E "$POOLDEVS"

For (4.) find device-mapper/LVM references to $POOLNAME's zvol block devices. The hexadecimal major/minor block device numbers from stat must be converted to the decimal (major, minor) pairs that dmsetup prints:

dmsetup deps | grep -Ff <(
    cd /dev/zvol/$POOLNAME &&
    # each symlink target is a /dev/zd* node; stat emits its hex
    # major/minor as a printf command line, which sh evaluates to
    # print the "(major, minor)" pair in decimal
    find . -type l -printf "%l\0" |
        xargs -0 stat -c 'printf "(%%d, %%d)\n" 0x%t 0x%T' |
        sh
)

Output like:

yoda-swap: 1 dependencies   : (230, 1409)
yoda-root: 1 dependencies   : (230, 1409)

Release them with, e.g., dmsetup remove yoda-swap yoda-root, which succeeds if they are inactive.

Pemmican answered 9/11, 2022 at 11:47 Comment(0)

I'm not clear from your comment whether the suggestions in the previous answers worked for you. Since you haven't accepted an answer, I'm assuming they didn't, so I'll give you another suggestion to try.

Boot up with a ZFS-enabled USB/CD/external HD, and destroy your pool from there. It could be difficult to stop all the services depending on that mount on your system as configured, and this would get around that in a clean fashion.
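From the live environment nothing holds the pool open, so an import without mounting, followed by destroy, should succeed. A sketch using the lxd pool from the question:

```shell
# Import the pool without mounting any of its file systems, then destroy it.
zpool import -N -f lxd
zpool destroy lxd
```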

Schlieren answered 12/1, 2017 at 6:48 Comment(0)

In your particular case you would need to delete the VM in LXD & then delete the pool.


I was experimenting with a zvol & a qcow2 file in a dataset on an SSD stripe in libvirt & managed to destroy a dataset that was being used as a storage pool by libvirt.

  • The file lock from libvirt left the zfs filesystem in an inconsistent state (which was always detected as busy)

Even rebooting after deleting the storage pool in virt-manager still showed ZFS in a busy state.

  • The only thing that worked was deleting every partition with fdisk (which removed the zfs metadata signature from the disks) & rebooting.
  • This also automatically removed the pool from /etc/zfs/zpool.cache
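As an alternative to deleting every partition with fdisk, ZFS can clear its own labels and wipefs can erase any remaining on-disk signatures. This is a substitute technique, not what was done above, and /dev/sdX is a placeholder for the former pool member (both commands are destructive):

```shell
# Erase the ZFS label from the former pool member.
zpool labelclear -f /dev/sdX

# Or clear all remaining filesystem/RAID signatures blkid would detect.
wipefs -a /dev/sdX
```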
Delindadelineate answered 16/10 at 22:0 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.