glusterfs volume creation failed - brick is already part of volume
In our cloud environment, we have a cluster of GlusterFS nodes (participating in a Gluster volume) and clients (that mount the Gluster volumes). These nodes are created with HashiCorp's Terraform.

Once the cluster is up and running, if we want to change the Gluster machine configuration, e.g. increasing the compute size from 4 CPUs to 8, Terraform can recreate the nodes with the new configuration. The existing Gluster nodes are destroyed and new instances are created, but with the same IPs. On the newly created instance, the volume creation command fails, saying the brick is already part of a volume:

sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0

volume create: VolName: failed: /mnt/ppshare/brick0 is already part of a volume

But no volumes are present on this instance.

I understand that if I have to expand or shrink a volume, I can add or remove bricks from the existing volume. Here, I'm changing the compute of the node, so it has to be recreated. I don't understand why it says the brick is already part of a volume when it is a new machine altogether.

It would be very helpful if someone could explain why it says the brick is already part of a volume, and where the volume/brick information is stored, so that I can recreate the volume successfully.

I also tried the steps below, from https://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/, to clear the GlusterFS-related extended attributes from the mount, but with no luck:

apt-get install attr
cd /glusterfs
for i in `attr -lq .`; do setfattr -x trusted.$i .; done
attr -lq /glusterfs   # for testing, the output should be empty

Broadminded answered 12/9, 2016 at 8:52 Comment(0)

Simply append "force" to the end of the "gluster volume create ..." command.
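Applied to the command from the question, that would look like the following (this needs a running cluster, so it is shown as a sketch only):

```shell
# Same create command as in the question, with "force" appended
sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 force
```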

Strage answered 26/1, 2017 at 21:17 Comment(1)
While ultimately, in my case, I did need to do this (because, following an example, my volume is located off of /), this is not a really helpful answer: it does not explain why one should use force, and therefore does not help educate someone as to why the OP's problem was encountered to begin with. The answer by @KirályIstván explains why the error occurred.Toein

Please check whether the directory /mnt/ppshare/brick0 already exists.

You should have /mnt/ppshare without the brick0 folder; the create command creates that folder itself. The error indicates that the brick0 folder is already present.

Tetratomic answered 10/1, 2017 at 17:58 Comment(1)
You are right! The Gluster documentation incorrectly tells you to create the "brick" directory beforehand.Crosspatch

When using the 'gluster volume replace-brick' or 'gluster volume reset-brick' commands, if the brick has been used in a volume before, or even just named in a 'replace-brick' or 'reset-brick' command, the volume will be recorded in the .glusterfs directory on the brick. To work around this, delete the .glusterfs directory, or delete and recreate the brick directory. For the example given:

sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0

delete and recreate the brick directory:

[root@ip1 ~]# rm -rf /mnt/ppshare/brick0
[root@ip1 ~]# mkdir /mnt/ppshare/brick0

Then try the command again.

Periodicity answered 26/3 at 20:5 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.