Attaching and mounting existing EBS volume to EC2 instance filesystem issue

I had some unknown issue with my old EC2 instance, and I can no longer SSH into it. Therefore I'm attempting to create a new EBS volume from a snapshot of the old volume and mount it on the new instance. Here is exactly what I did:

  1. Created a new volume from a snapshot of the old one.
  2. Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf).
  3. SSHed into the instance and attempted to mount the old volume with:

$ sudo mkdir -m 000 /vol
$ sudo mount /dev/xvdf /vol

And the output was:

mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type

I know I should specify the filesystem as ext4, but the volume contains a lot of important data, so I cannot afford to format it with $ sudo mkfs -t ext4 /dev/xvdf. If I try sudo mount /dev/xvdf /vol -t ext4 (no formatting) I get:

mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

And dmesg | tail gives me:

[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem

By the way, the 'mounting read-only' message also worries me, but I haven't looked into it yet since I can't mount the volume at all.

Coverley answered 1/3, 2015 at 10:4 Comment(2)
I updated my answer. Does that work for you?Donohue
A simpler solution is to extend your volume. I posted an answer here: https://mcmap.net/q/294801/-attaching-and-mounting-existing-ebs-volume-to-ec2-instance-filesystem-issueAstomatous

The One Liner


🥇 Mount the partition (if disk is partitioned):

sudo mount /dev/xvdf1 /vol -t ext4

Mount the disk (if not partitioned):

sudo mount /dev/xvdf /vol -t ext4

where:

  • /dev/xvdf is changed to the EBS volume device being mounted.
  • /vol is changed to the folder you want to mount to.
  • ext4 is changed to the filesystem type of the volume being mounted.

Common Mistakes and How-Tos:


✳️ Attached Devices List

Check your mount command for the correct EBS Volume device name and filesystem type. The following will list them all:

sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL

If your EBS Volume displays with an attached partition, mount the partition; not the disk.
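
For example, lsblk output like the following (illustrative only; the UUID and LABEL columns are omitted here) would mean you should mount /dev/xvdf1, not /dev/xvdf:

NAME    TYPE SIZE FSTYPE MOUNTPOINT
xvda    disk   8G
└─xvda1 part   8G ext4   /
xvdf    disk   8G
└─xvdf1 part   8G ext4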


✳️ If your volume isn't listed

If it doesn't show up, you didn't attach your EBS volume in the AWS web console.
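
You can also attach it from the AWS CLI; a sketch, where the volume ID, instance ID, and device name are placeholders you must replace with your own:

# Attach an existing EBS volume to a running instance (placeholder IDs)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf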


✳️ Auto Remounting on Reboot

These devices become unmounted again if the EC2 Instance ever reboots.

A way to make them mount again upon startup is to add the volume to the server's /etc/fstab file.

🔥 Caution:🔥
If you corrupt the /etc/fstab file, it will make your system unbootable. Read AWS's short article so you know how to check that you did it correctly.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html#ebs-mount-after-reboot

First:
With the lsblk command above, find your volume's UUID & FSTYPE.
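
If lsblk doesn't show them, blkid reports the same details (illustrative output; your UUID and type will differ):

sudo blkid /dev/xvdf1
# /dev/xvdf1: UUID="e4a4b1df-cf4a-469b-af45-89beceea5df7" TYPE="ext4"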

Second:
Keep a copy of your original fstab file.

sudo cp /etc/fstab /etc/fstab.original

Third:
Add a line for the volume in sudo nano /etc/fstab.

The fields of fstab are whitespace-separated (tabs or spaces), and each line has the following fields:

<UUID>  <MOUNTPOINT>    <FSTYPE>    defaults,discard,nofail 0   0

Here's an example to help you; my own fstab reads as follows:

LABEL=cloudimg-rootfs   /   ext4    defaults,discard,nofail 0   0
UUID=e4a4b1df-cf4a-469b-af45-89beceea5df7   /var/www-data   ext4    defaults,discard,nofail 0   0

That's it, you're done. Check for errors in your work by running:

sudo mount --all --verbose

You will see something like this if things are 👍:

/                   : ignored
/var/www-data       : already mounted
Donohue answered 1/3, 2015 at 10:10 Comment(9)
Sorry, forgot to add that. I'll edit the question so you may see the outputs.Coverley
The mount command I gave in my answer does not overwrite your Volume filesystem so you don't need to be worried about losing data. Also, rest assured that if things go wrong.. you can just re-create a new copy of your Volume from the original EBS snapshot like you did to create the volume in the first place. Do my mount command, it is non-destructive. I promise. And if you do recreate a new volume, make sure it is not a read-only replica.Donohue
I had already tried that, just forgot to put it in the question. Thanks for the tip on the 'read-only' problem.Coverley
I read your update. Use this to check the filesystem type and update the mount command with what it tells you for TYPE="" Use: blkid /dev/xvdfDonohue
Solved it! Thank you very much for the help. After getting no output at all from the command you mentioned, I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf. Simple mistake, sorry. Using sudo mount /dev/xvdf1 /vol -t ext4 worked like a charm.Coverley
Thanks for reporting back! Super happy you got this figured out. Rock on.Donohue
@GabrielRebello Thanks for your comment, that saved me. I'll make it an answer, just to give it more visibilityRapprochement
You should be mounting drives in /etc/fstab not by running a mount command in the rc.localAcupuncture
mount: wrong fs type, bad option, bad superblock on /dev/xvdf1, missing codepage or helper program, or other errorList

I encountered this problem too after adding a new 16GB volume and attaching it to an existing instance. First of all, you need to know what disks are present. Run:

  sudo fdisk -l 

You'll have output like the one shown below, detailing information about your disks (volumes):

 Disk /dev/xvda: 12.9 GB, 12884901888 bytes
  255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    25157789    12570862+  83  Linux

 Disk /dev/xvdf: 17.2 GB, 17179869184 bytes
 255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0x00000000

 Disk /dev/xvdf doesn't contain a valid partition table

As you can see, the newly added disk /dev/xvdf is present. To make it available you need to create a filesystem on it and mount it to a mount point. You can achieve that with the following commands:

 sudo mkfs -t ext4 /dev/xvdf

Making a new filesystem clears everything on the volume, so only do this on a fresh volume without important data.

Then mount it, perhaps in a directory under the /mnt folder:

 sudo mount /dev/xvdf /mnt/dir/

Confirm that you have mounted the volume to the instance by running

  df -h

This is what you should see:

Filesystem      Size  Used Avail Use% Mounted on
udev            486M   12K  486M   1% /dev
tmpfs           100M  400K   99M   1% /run
/dev/xvda1       12G  5.5G  5.7G  50% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            497M     0  497M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/xvdf        16G   44M   15G   1% /mnt/ebs

And that's it: the volume is attached to your existing instance and ready for use.

Delative answered 4/1, 2016 at 9:17 Comment(0)

I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf.

Using

sudo mount /dev/xvdf1 /vol -t ext4

worked like a charm

Rapprochement answered 11/6, 2015 at 9:42 Comment(1)
xvdf is the disk, xvdf1 is the partition. If the disk is partitioned like this, you need to mount the partition itself.Rigging

I encountered this problem too, and here is what I found:

[ec2-user@ip-172-31-63-130 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part

You should mount the partition

/dev/xvdf1 (whose TYPE is part)

not the disk

/dev/xvdf (whose TYPE is disk)
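
For the lsblk output above, that means the working command is (assuming the /vol mount point already exists):

sudo mount /dev/xvdf1 /vol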

Mayman answered 24/1, 2016 at 13:23 Comment(3)
This solved my problem. It was such a simple mistake! Thanks!Haslet
So easy... Thanks!Kolva
This worked for me, but how is it that the accepted answer and all other sources mention mounting xvdf instead of xvdf1?Zoraidazorana

For me it was a duplicate UUID error while mounting the volume, so I used the -o nouuid option.

For example:

mount -o nouuid /dev/xvdf1 /mnt

I found the clue in the system logs (on CentOS, /var/log/messages), which showed the error: kernel: XFS (xvdf1): Filesystem has duplicate UUID f41e390f-835b-4223-a9bb-9b45984ddf8d - can't mount
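
The -o nouuid option works around the clash for a single mount. To fix it permanently you can give the filesystem a fresh UUID; a sketch, assuming the xfsprogs tools are installed and the filesystem is unmounted:

# Generate a new random UUID for the XFS filesystem (must be unmounted)
sudo xfs_admin -U generate /dev/xvdf1
# Verify the new UUID before mounting normally
sudo blkid /dev/xvdf1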

Agan answered 19/7, 2020 at 13:11 Comment(0)

I had a different issue: when I checked the dmesg logs, the problem was that the UUID of the volume was the same as the UUID of the root volume of another EC2 instance. To fix this, I mounted it on an EC2 instance of a different Linux type, and that worked.

Berkley answered 12/3, 2018 at 16:37 Comment(2)
+1 I was following the steps here: docs.aws.amazon.com/AWSEC2/latest/UserGuide/… and it instructs you to use the same AMI and instance type but that seems to result in the same Disk Identifier. Switching to a different AMI fixed it for me too. sudo fdisk -l will display the disk idForceful
This happened to me as well. When I created another instance using the image of the instance of which I had lost the key, it created the volume with same UUID. So I created a new instance without using any image and then it mounted correctlyMalinowski

First, run the command below:

lsblk /dev/xvdf

The output will be something like this:

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf    202:80   0  10G  0 disk
├─xvdf1 202:81   0   1M  0 part
└─xvdf2 202:82   0  10G  0 part

Then check the sizes and mount the right one. In the case above, mount it like this:

mount /dev/xvdf2 /foldername

Viral answered 15/5, 2020 at 21:55 Comment(0)

First, check the filesystem type with the lsblk -f command; in my case it is XFS:

#lsblk -f
NAME    FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
xvda
├─xvda1
├─xvda2 vfat   FAT16 EFI   31C3-C85B                              17.1M    14% /boot/efi
└─xvda3 xfs          ROOT  6f6ccaeb-068f-4eb7-9228-afeb8e4d25df    7.6G    24% /
xvdf
├─xvdf1
├─xvdf2 vfat   FAT16 EFI   31C3-C85B
└─xvdf3 xfs          ROOT  6f6ccaeb-068f-4eb7-9228-afeb8e4d25df    5.4G    46% /mnt/da

Modify your mount command according to the filesystem type:

mount -t xfs -o nouuid /dev/xvdf3 /mnt/data/
Philender answered 6/6, 2022 at 8:39 Comment(0)

You do not need to create a filesystem on a volume newly created from a snapshot. Simply attach the volume and mount it to the folder you want. I attached the new volume to the same location as the previously deleted volume and it worked fine.

[ec2-user@ip-x-x-x-x vol1]$ sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk 
└─xvda1 202:1    0   8G  0 part /
xvdb    202:16   0  10G  0 disk /home/ec2-user/vol1
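
For reference, the attach-and-mount itself is just the following (a sketch, assuming the volume shows up as /dev/xvdb as above):

# Create the mount point if it doesn't exist, then mount the volume
sudo mkdir -p /home/ec2-user/vol1
sudo mount /dev/xvdb /home/ec2-user/vol1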
Floc answered 12/6, 2017 at 17:23 Comment(0)

I usually make the mount persistent by pre-defining the UUID when creating the ext4 filesystem. I add a script like the one below to the instance user data and launch the instance; it works fine without any issues.

Example script:

#!/bin/bash
# Create the directory to be mounted
sudo mkdir -p /data
# Create the file system with a pre-defined UUID and label (edit the device name as needed)
sudo mkfs -U aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa -L DATA -t ext4 /dev/nvme1n1 

# Mount
sudo mount /dev/nvme1n1 /data -t ext4

# Update the fstab to persist after reboot
sudo su -c "echo 'UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa   /data  ext4    defaults,discard,nofail 0   0' >> /etc/fstab"
Demantoid answered 20/1, 2020 at 18:32 Comment(0)

In my case there was some mysterious file causing this issue, and I had to wipe the device by creating a new filesystem:

sudo mkfs -t ext3 /dev/sdf

Warning: this erases the existing filesystem and any files saved on it, so first run ls on the volume to make sure you aren't losing anything important.

Yell answered 10/4, 2020 at 18:7 Comment(0)

There are some very good and helpful answers here, but if you want the simplest solution, extend the volume of the existing instance; this is supported on all newer EC2 instances: https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/

The doc says that after you increase the volume size you still have to extend the filesystem: https://docs.aws.amazon.com/ebs/latest/userguide/recognize-expanded-volume-linux.html
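
The expansion step from that doc typically looks like this (a hedged sketch assuming an ext4 filesystem on partition 1 of /dev/xvda; growpart ships in the cloud-utils / cloud-guest-utils packages):

# Grow partition 1 to fill the newly enlarged disk
sudo growpart /dev/xvda 1
# Grow the ext4 filesystem to fill the partition (use xfs_growfs for XFS)
sudo resize2fs /dev/xvda1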

In my case I had to reboot the machine, and then I could see the additional 30GB I added.


Astomatous answered 19/3 at 16:28 Comment(0)
