Windows Spanned Disks (LDM) restoration with Linux?

Is it possible to read Windows 2008 LDM partitions in Linux?

We have five 512GB LUNs exported through iSCSI to a dead Windows 2008 box, and this box doesn't want them anymore. Windows believes they are now raw devices... So I'd like to read the partitions with Linux. I am using the latest Ubuntu to try to save at least some of the data. The problem is that all the documentation I've found so far seems to be obsolete (often talking about the Windows 2000 or XP Logical Disk Manager (LDM)), and I think the format changed with 2008.

TestDisk [0] gives me the following output:

testdisk /list LUN01
TestDisk 6.11, Data Recovery Utility, April 2009
Christophe GRENIER <[email protected]>
http://www.cgsecurity.org
Please wait...
Disk LUN01 - 536 GB / 500 GiB - CHS 65271 255 63, sector size=512

Disk LUN01 - 536 GB / 500 GiB - CHS 65271 255 63
     Partition                  Start        End    Size in sectors
 1 P MS LDM MetaData               34       2081       2048 [LDM metadata partition]
No FAT, NTFS, EXT2, JFS, Reiser, cramfs or XFS marker
 2 P MS Reserved                 2082     262177     260096 [Microsoft reserved partition]
 3 P MS LDM Data               262178 1048576966 1048314789 [LDM data partition]

Note: Each of the 5 LUNs has the same partition table.

In many documents, like the ones at cgsecurity and kernel.org, they talk about ldminfo, which doesn't return any useful information. I suspect it's now obsolete, if only because it was so hard to find :) And since it does not work, I guess Windows 2008 uses a different format.

# ldminfo LUN01
Something went wrong, skipping device 'LUN01'
# losetup /dev/loop1 LUN01
# losetup -a
/dev/loop1: [fd00]:14 (/mnt/LUN01)
# ldminfo /dev/loop1 
Something went wrong, skipping device '/dev/loop1'

Then I tried to concatenate them with dmsetup, but again no luck. Here is how I used dmsetup:

# losetup /dev/loop1 LUN01
# losetup /dev/loop2 LUN02
# losetup /dev/loop3 LUN03
# losetup /dev/loop4 LUN04
# losetup /dev/loop5 LUN05
# blockdev --getsize /dev/loop1
1048577000
# cat > w2008.mapping
# Offset into   Size of this    Raid type       Device          Start sector
# volume        device                                          of device
0               1048577000  linear          /dev/loop1       0
1048577000      1048577000  linear          /dev/loop2       0
2097154000      1048577000  linear          /dev/loop3       0
3145731000      1048577000  linear          /dev/loop4       0
4194308000      1048577000  linear          /dev/loop5       0
# dmsetup create myfs w2008.mapping
# mount -t ntfs /dev/mapper/myfs /mnt/final
NTFS signature is missing.
Failed to mount '/dev/loop1': Invalid argument
The device '/dev/loop1' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
# echo Poo.

So still no NTFS filesystem :)

Does anyone have any ideas about how I can extract the data from there or give me some pointers?

Diaphoretic answered 8/12, 2011 at 7:2 Comment(0)

Alright, I will reply to my own question to spare others the same pain.

0. WARNING

If you are doing a recovery, ALWAYS COPY YOUR DATA and work on the copy. Do NOT alter the original 'broken' data. That said, keep reading.

1. Your partition looks like ...

Install The Sleuth Kit and testdisk. Hopefully there will be packages for your distro :)

# mmls -t gpt LUN01
GUID Partition Table (EFI)
Offset Sector: 0
Units are in 512-byte sectors

    Slot    Start        End          Length       Description
00:  Meta    0000000000   0000000000   0000000001   Safety Table
01:  -----   0000000000   0000000033   0000000034   Unallocated
02:  Meta    0000000001   0000000001   0000000001   GPT Header
03:  Meta    0000000002   0000000033   0000000032   Partition Table
04:  00      0000000034   0000002081   0000002048   LDM metadata partition
05:  01      0000002082   0000262177   0000260096   Microsoft reserved partition
06:  02      0000262178   1048576966   1048314789   LDM data partition
07:  -----   1048576967   1048576999   0000000033   Unallocated

Note: testdisk gives you the same info with less detail: # testdisk /list LUN01

2. Extract disks metadata

All the information about the disk order, data size and other cryptic attributes of the partition can be found in the LDM metadata partition. W2k8 has not changed much since this document [2], although some sizes are different and some attributes are new (and obviously unknown)...

# dd if=LUN01 skip=33 count=2048 |xxd -a > lun01.metadata
# less lun01.metadata 

At offset 0002410 in the dump you should see the name of the server. Reassuring? But we are after the disk order and the disk IDs. Scroll down.

2.1. Disks Order

At offset 0003210 you should see 'Disk1' followed by a long string.

0003200: 5642 4c4b 0000 001c 0000 0006 0000 0001  VBLK............
0003210: 0000 0034 0000 003a 0102 0544 6973 6b31  ...4...:...Disk1
0003220: 2437 3965 3830 3239 332d 3665 6231 2d31  $79e80293-6eb1-1
0003230: 3164 662d 3838 6463 2d30 3032 3662 3938  1df-88dc-0026b98
0003240: 3335 6462 3300 0000 0040 0000 0000 0000  35db3....@......
0003250: 0048 0000 0000 0000 0000 0000 0000 0000  .H..............

This means that the first disk of this volume is identified by the following unique ID (UID): 79e80293-6eb1-11df-88dc-0026b9835db3. But at the moment, we don't know which physical disk has this UID! So move to the Disk2 entry and take note of its UID, and so on for all the disks you had in your volume (see the grep sketch after the example below). Note: in my experience only the first 8 characters change, the rest stays the same. Indeed, W2k8 seems to increment the ID by 6. The $ is a separator.

Eg. :

Windows Disk1 UID : 79e80293-6eb1-11df-88dc-0026b9835db3
Windows Disk2 UID : 79e80299-...
Windows Disk3 UID : 79e8029f-...
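
A quick way to list every DiskN entry with its UID straight from the hexdump is to grep the VBLK names; a sketch against the lun01.metadata dump created above (-A 2 keeps the two following lines that carry the rest of each UID):

# grep -A 2 'Disk[0-9]' lun01.metadata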

2.2. Find Disk UID

Go to offset 00e8200 in lun01.metadata. You should find 'PRIVHEAD'.

00e8200: 5052 4956 4845 4144 0000 2c41 0002 000c  PRIVHEAD..,A....
00e8210: 01cc 6d37 2a3f c84e 0000 0000 0000 0007  ..m7*?.N........
00e8220: 0000 0000 0000 07ff 0000 0000 0000 0740  ...............@
00e8230: 3739 6538 3032 3939 2d36 6562 312d 3131  79e80299-6eb1-11
00e8240: 6466 2d38 3864 632d 3030 3236 6239 3833  df-88dc-0026b983
00e8250: 3564 6233 0000 0000 0000 0000 0000 0000  5db3............
00e8260: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00e8270: 3162 3737 6461 3230 2d63 3731 372d 3131  1b77da20-c717-11
00e8280: 6430 2d61 3562 652d 3030 6130 6339 3164  d0-a5be-00a0c91d
00e8290: 6237 3363 0000 0000 0000 0000 0000 0000  b73c............
00e82a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00e82b0: 3839 3164 3065 3866 2d64 3932 392d 3131  891d0e8f-d929-11
00e82c0: 6530 2d61 3861 372d 3030 3236 6239 3833  e0-a8a7-0026b983
00e82d0: 3564 6235 0000 0000 0000 0000 0000 0000  5db5............
00e82e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

What we are after is the disk UID of this particular disk. We see:

- Disk Id : 79e80299-6eb1-11df-88dc-0026b9835db3
- Host Id : 1b77da20-c717-11d0-a5be-00a0c91db73c
- Disk Group Id : 891d0e8f-d929-11e0-a8a7-0026b9835db5

So the disk with UID 79e80299-... is Windows Disk2, but for us it was physical disk 1. Find each UID in the disk order you extracted above. Note: there is no logical order; Windows decides how to set up the disk order, not you, so don't expect your first physical disk to be Disk1.

In short, don't assume the order above follows any human logic. I recommend going through the LDM metadata of all your disks and extracting their UIDs, for instance with the loop after the example below. (You can use the following command to extract just the PRIVHEAD info: dd if=LUNXX skip=1890 count=1 |xxd -a)

e.g:

(Windows) Disk1 : 79e80293-... == Physical disk 2
(Windows) Disk2 : 79e80299-... == Physical disk 1
(Windows) Disk3 : 79e8029f-... == Physical disk 3
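
To collect the UIDs from every LUN in one pass, a small loop works; a sketch, assuming your images/devices are named LUN01..LUN05 and PRIVHEAD sits at sector 1890 as in this layout:

for lun in LUN01 LUN02 LUN03 LUN04 LUN05; do
  echo "== $lun =="
  # the disk UID lives in bytes 0x30-0x53 of the PRIVHEAD sector,
  # i.e. lines 4-6 of the xxd output
  dd if="$lun" skip=1890 count=1 2>/dev/null | xxd -a | sed -n '4,6p'
done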

I am sure that somewhere in the LDM metadata you can find the type of volume (spanned, RAID0, RAIDX, and the associated stripe sizes), but I haven't dug into it. I used a 'try and retry' method to find my data. So if you know how your configuration was set up before the drama, you will save yourself a lot of time.

3. Find the NTFS filesystem and your data

Now we are interested in the big chunk of data we want to restore. In my case it's ~512GB of data, so we won't convert the whole thing to ASCII. I haven't really researched how Windows finds the beginning of its NTFS partition, but what I found is that it logically starts with the marker R.NTFS (the NTFS boot sector: a jump instruction followed by the 'NTFS' OEM ID). Let's find it and work out the offset we will have to apply later to see our NTFS FS.

06:  02      0000262178   1048576966   1048314789   LDM data partition

In this example, the data starts at sector 262178 and is 1048314789 sectors long.

We found above that Disk1 (of the volume group) is actually the 2nd physical disk. We will extract some of its data to find where the NTFS partition starts.

# dd if=LUN02 skip=262178 count=4096 |xxd -a > lun02.DATASTART-4k
# less lun02.DATASTART-4k

0000000: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
00fbc00: eb52 904e 5446 5320 2020 2000 0208 0000  .R.NTFS    .....
00fbc10: 0000 0000 00f8 0000 3f00 ff00 0008 0400  ........?.......
00fbc20: 0000 0000 8000 8000 ffaf d770 0200 0000  ...........p....

Here we can see that NTFS starts at offset 00fbc00. Knowing that, we can start extracting our data from sector 262178 plus 00fbc00 bytes. Let's do a bit of hexadecimal-to-decimal conversion, with a bytes-to-sectors conversion as well.

0xfbc00 bytes = 1031168 bytes = 1031168/512 sectors = 2014 sectors

So our NTFS partition starts at sector 262178 + 2014 = 264192. This value is an offset we will use later on all disks; let's call it the NTFS offset. Obviously the total size shrinks by the same offset, so the new size is: 1048314789 - 2014 = 1048312775 sectors.
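
The same arithmetic in the shell, if you don't trust your mental hex (values from the example above):

# echo $((0xfbc00))
1031168
# echo $((0xfbc00 / 512))
2014
# echo $((262178 + 2014))
264192
# echo $((1048314789 - 2014))
1048312775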

4. Try to mount/see the data

From now on, either it will work out of the box because your NTFS partition is healthy, or it won't because you're doing this to recover some data. The process is the same whatever your situation. All of the following is based on [1] (see Links at the bottom).

A spanned volume fills one disk after another, whereas a striped volume (RAID0) spreads chunks of data over many disks (i.e. a single file is spread across several disks). In my case, I didn't know whether it was a spanned or a striped volume. The easiest way to tell, if your volume is not full, is to check whether you have a lot of zeroes at the end of all your disks. If that's the case, it's striped, because a spanned volume fills the first disk, then the second, and so on. I am not 100% sure of that, but it's what I observed. So dd a bunch of sectors from the end of the LDM data partition, as in the sketch below.
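
For example, with the layout above (LDM data partition ending at sector 1048576966), peek at the last megabyte of each disk; xxd -a collapses runs of zeroes into a '*', so an all-zero tail is easy to spot:

# dd if=LUN01 skip=$((1048576966 - 2047)) count=2048 | xxd -a | tail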

4.0 Preparations to access your data

First, attach your dd file or your device to a loopback device, using the NTFS offset and the size we calculated above. However, for losetup the offset and size must be in bytes, not sectors:

offset = 264192*512 = 135266304
size = 1048312775*512 = 536736140800

# losetup /dev/loop2 DDFILE_OR_DEVICE -o 135266304 --size 536736140800
# blockdev --getsize /dev/loop2
1048312775 <---- total size in sectors, same number as before

Note: you can add '-r' to losetup to attach the device in read-only mode.

Do the above for all the physical disks that are part of your volume, then display the result with losetup -a. Note: if you don't have enough loop devices, you can easily create more with: # mknod -m0660 /dev/loopNUMBER b 7 NUMBER && chown root.disk /dev/loopNUMBER
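
A loop like this attaches everything in one go; a sketch, assuming the five images are named LUN01..LUN05 (note that newer versions of util-linux spell the size flag --sizelimit rather than --size):

offset=$((264192 * 512))        # 135266304 bytes
size=$((1048312775 * 512))      # 536736140800 bytes
i=1
for lun in LUN01 LUN02 LUN03 LUN04 LUN05; do
  # -r attaches read-only; drop it only once you trust your offsets
  losetup -r -o "$offset" --sizelimit "$size" "/dev/loop$i" "$lun"
  i=$((i + 1))
done
losetup -a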

Check your alignment by opening the first disk of the group (eg: Disk2) and verifying that the first line is R.NTFS. If it isn't, your alignment is wrong: verify your calculations above and try again, or you are not looking at the 1st Windows disk.

e.g:

First disk of the volume has been mounted on /dev/loop2 
# xxd /dev/loop2 |head
0000000: eb52 904e 5446 5320 2020 2000 0208 0000  .R.NTFS    ..... 
0000010: 0000 0000 00f8 0000 3f00 ff00 0008 0400  ........?.......

All good. Let's move to the annoying part :)

4.1 Spanned

Spanned disks are actually a chain of disks: you fill the first, then you use the second, and so on and so forth. Create a file which looks like this, eg:

# Offset into   Size of this    Raid type       Device          Start sector
# volume        device                                          of device
0               1048312775  linear          /dev/loop2       0
1048312775      1048312775  linear          /dev/loop1       0
2096625550      1048312775  linear          /dev/loop3       0

Notes:
- Remember to use the correct disk order (the one you found earlier), eg: physical Disk2 followed by physical Disk1 and physical Disk3.
- 2096625550 = 2*1048312775; and obviously, if you have a fourth disk, its offset is 3 times that size, and so on.

4.2 Striped

The problem with striped mode (aka RAID0) is that you must know your stripe size. Apparently the default is 64k (in my case it was 128k, but I don't know whether the Windows sysadmin had tuned it :). Anyway, if you don't know it, you just have to try all the standard values and see which one gives you a viable NTFS filesystem.

Create a file like the following for 3 disks with a 128-sector chunk size (note that dmsetup expects the chunk size in 512-byte sectors):

                       .---+--> 3 stripes, 128-sector chunks
0 3144938112  striped  3  128      /dev/loop2 0 /dev/loop3 0 /dev/loop1 0
   `---> total size of the volume      `----------+-----------+---> disk order

/!\ : The size of the volume is not exactly the size we calculated before. dmsetup needs a volume size divisible by the chunk size (aka stripe size) AND by the number of disks in the volume. In our case we have 3 disks of 1048312775 sectors, so the 'natural' size is 1048312775*3 = 3144938325 sectors, but due to the above constraint we recalculate and round it down to a multiple of 128*3:

# echo "3144938325/(128*3)*(128*3)" | bc
3144938112

So 3144938112 sectors is the size of your volume in a striped scenario with 3 disks and 128-sector chunks (aka stripes).

4.3 Mount it.

Now let's aggregate everything together with dmsetup:

# dmsetup create myldm /path/myconfigfile
# dmsetup ls
myldm       (253, 1)

# mount -t ntfs -o ro /dev/mapper/myldm /mnt 

If it does not mount, you can use testdisk:

# testdisk /dev/mapper/myldm
--> Analyse
----> Quick search
------> You should see the volume name (if any). If not it seems compromised :)
--------> Press 'P' to see files and copy with 'c'

5. Conclusion

The above worked for me; your mileage may vary, and there may be a better and easier way to do it. If so, share it so nobody else has to go through this hassle :) Also, it may look hard, but it is not. As long as you copy your data somewhere, just try and retry until you can see something. It took me 3 days to understand how to put all the bits together; hopefully the above will save you from wasting 3 days too.

Note: all the examples above have been made up, so there may be some inconsistencies between them despite my thoroughness ;)

Good luck.

6. Links

Diaphoretic answered 19/12, 2011 at 4:49 Comment(3)
This answer is still relevant even in 2021. ldmtool was unable to detect my striped device, which had a corrupted metadata partition. Following this answer I was able to recover 1.5TB of data, which included some old meaningful data that I had not yet backed up. Thank you so much for the effort and detail you put into this answer! - Dishonesty
Glad to see that it helps people, even now! :) - Diaphoretic
Oof, just realised, it was 10 years ago! Time flies! - Diaphoretic

Here's the (much easier) answer, now that ldmtool exists.

ldmtool reads LDM (aka Windows Dynamic Disks) metadata and, among other things, creates device-mapper entries for the corresponding drives, partitions, and RAID arrays, allowing you to access and mount them just like any other block device in Linux.

The program does have a few limitations, mostly stemming from the fact that it does not modify LDM metadata at all. So you cannot create LDM disks in Linux (use Windows for that), and you should not mount RAID volumes with missing disks in read-write mode. (ldmtool won't update the metadata to reflect that this happened, and the next time Windows assembles the RAID array, problems will ensue, as not all the drives will be in sync.)

Here are the steps to follow:

  1. To install ldmtool on Debian and Ubuntu systems, type apt-get install ldmtool. It should be similarly easy on most other recent Linux distributions.
  2. Run ldmtool create all.
  3. You should now have a bunch of new entries in /dev/mapper. Locate the right one (in my case, a RAID1 array, so /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2), and just mount it with something like mount -t ntfs /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2 /mnt, as in the session sketch below.
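
A typical session looks like this (the volume name is illustrative; ldmtool scan merely lists block devices that carry LDM metadata):

# ldmtool scan
# ldmtool create all
# ls /dev/mapper/
# mount -t ntfs -o ro /dev/mapper/ldm_vol_VOLNAMEHERE-Dg0_Volume2 /mnt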

To have this done automatically at boot time, you will likely need to insert a call to ldmtool create all at the right point in the boot sequence, before the contents of /etc/fstab are mounted. A good way of making the call would be:

[ -x /usr/bin/ldmtool ] && ldmtool create all >/dev/null || true

But how to get this snippet to run at the right time during boot will vary a lot depending on the distribution you are using. For Ubuntu 13.10, I inserted said line in /etc/init/mountall.conf, right before the exec mountall ... call at the end of the script section, and I can now mount my Windows LDM RAID1 partition through /etc/fstab. Enjoy!
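
On systemd-based distributions, one option is a small oneshot unit that runs before local filesystems are mounted; this is a sketch (the unit name and ordering targets are assumptions, adapt them to your distro):

# /etc/systemd/system/ldm-assemble.service
[Unit]
Description=Assemble Windows LDM (dynamic disk) volumes
DefaultDependencies=no
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/bin/ldmtool create all

[Install]
WantedBy=local-fs-pre.target

Enable it with systemctl enable ldm-assemble.service.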

Slapbang answered 1/3, 2014 at 0:59 Comment(2)
This is the correct answer and should be marked as such - Rees
This looks like a perfect solution, and yet on some recent Linuxes it sometimes just fails (the "create all" command just says "Killed", and no volume devices are created). There are no helpful error messages, etc. It turns out the problem can be a kernel bug described here: patchwork.kernel.org/patch/9654437 . I recompiled the kernel with the patch from the link, and ldmtool started working. - Fluoride

Windows Dynamic Volume, 5 disks, spanned, 8TB total.

This is what I've gathered from the answer above and by referencing [1] and [2].

What I discovered is that there is more than just the disk-order GUID information in the metadata partition: there is a clear structure that contains the size, the offset, and the offset within the spanned volume.

Use sections {2.1} and {2.2} of the answer above to determine the order of the drives.

My disks are exported as 4x 2TB chunks plus 1x smaller chunk from a single RAID5 array on a 3ware 9650SE controller. Each disk has the following layout:

/dev/sdX1 = LDM metadata partition (~1MB)
/dev/sdX2 = Microsoft reserved partition (~100MB)
/dev/sdX3 = LDM data partition (~1.99TB / 20GB)

From 'xxd -a -l 65535 /dev/sdd1 | more' I get:

0002800: 5642 4c4b 0000 000c 0000 000e 0000 0001  VBLK............
0002810: 0000 4033 0000 0031 0109 0844 6973 6b31  [email protected]
0002820: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
0002830: 0000 0000 0007 de00 0000 0000 0000 0004  ................
                     ^---^ Note 07 de (offset)
0002840: fffb f000 0108 0102 0000 0000 0000 0000  ................
         ^-------^ Note fffb f000 (size)
0002850: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
0002880: 5642 4c4b 0000 000d 0000 000f 0000 0001  VBLK............
0002890: 0000 4033 0000 0031 010a 0844 6973 6b32  [email protected]
00028a0: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
00028b0: 0000 0000 0007 de00 0000 00ff fbf0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
00028c0: fffb f000 0108 0103 0000 0000 0000 0000  ................
         ^-------^ note size again!
00028d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
0002900: 5642 4c4b 0000 000e 0000 0010 0000 0001  VBLK............
0002910: 0000 4033 0000 0031 010b 0844 6973 6b33  [email protected]
0002920: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
0002930: 0000 0000 0007 de00 0000 01ff f7e0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
0002940: fffb f000 0108 0104 0000 0000 0000 0000  ................
         ^-------^ note size again!
0002950: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
0002980: 5642 4c4b 0000 000f 0000 0011 0000 0001  VBLK............
0002990: 0000 4033 0000 0031 010c 0844 6973 6b34  [email protected]
00029a0: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
00029b0: 0000 0000 0007 de00 0000 02ff f3d0 0004  ................
                     ^---^ Offset   ^--------^ Now see spanned offset
00029c0: fffb f000 0108 0105 0000 0000 0000 0000  ................
         ^-------^ note size again!
00029d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
0002a00: 5642 4c4b 0000 0010 0000 0012 0000 0001  VBLK............
0002a10: 0000 4033 0000 0031 010d 0844 6973 6b35  [email protected]
0002a20: 2d30 3100 0000 0000 0000 0000 0000 0b00  -01.............
0002a30: 0000 0000 0007 de00 0000 03ff efc0 0004  ................ 
                     ^---^ Offset   ^--------^ Now see spanned offset
0002a40: 17b7 d000 0108 0106 0000 0000 0000 0000  ................ 
         ^-------^ And my final drive is the smallest
0002a50: 0000 0000 0000 0000 0000 0000 0000 0000  ................

So, from the above you can clearly see the size of the data section, the offset within the partition, and the offset within the spanned volume. Let's do the maths:

Disk1:
Size of block = fffb f000 = 4294701056
Start offset = 07 de = 2014
Partition offset = 00 0000 00 = 0

Disk2:
Size of block = fffb f000 = 4294701056
Start offset = 07 de = 2014
Partition offset = 00ff fbf0 00 = 4294701056

Disk3:
Size of block = fffb f000 = 4294701056
Start offset = 07 de = 2014
Partition offset = 01ff f7e0 00 = 8589402112

Disk4:
Size of block = fffb f000 = 4294701056
Start offset = 07 de = 2014
Partition offset = 02ff f3d0 00 = 12884103168

Disk5:
Size of block = 17b7 d000 = 397922304
Start offset = 07 de = 2014
Partition offset = 03ff efc0 00 = 17178804224

*Note: Use Excel, hex2dec() function*
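
Or stay in the shell; printf does the same conversion (values from Disk2 and Disk3 above):

# printf '%d\n' 0xfffbf000
4294701056
# printf '%d\n' 0x7de
2014
# printf '%d\n' 0x01fff7e000
8589402112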

This translates, with dmsetup, to:

# File /etc/ntfsvolume
# Offset into   Size of this    Raid    Device          Start sector
# volume        device          type                    of device
0               4294701056      linear  /dev/sdd3       2014
4294701056      4294701056      linear  /dev/sdc3       2014
8589402112      4294701056      linear  /dev/sdf3       2014
12884103168     4294701056      linear  /dev/sde3       2014
17178804224     397922304       linear  /dev/sdg3       2014

which can then be directly mounted via:

$ sudo dmsetup create myvolume /etc/ntfsvolume
$ sudo mkdir /media/volume/
$ sudo mount -t ntfs-3g /dev/mapper/myvolume /media/volume
$ sudo mount -t ntfs-3g -o ro /dev/mapper/myvolume /media/volume   (read-only variant)

which requires:

dmsetup (device-mapper)
ntfs-3g

WARNING!

Be absolutely sure that you have all the offsets, on-disk sizes and span offsets correct before mounting read-write. ntfs-3g will mount even if the offsets are wrong, and your file contents will not be correct.

A good double check is to run Windows chkdsk and look at the extra information at the end. Note the total number of allocation units, multiply that by the block size (mine was 4096), then divide by 512 (the normal sector size); the result should match the volume size you calculated from the metadata.
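
For example, the conversion in the shell (using the illustrative allocation-unit count from the paragraph below):

# echo $(( 2197090815 * 4096 / 512 ))
17576726520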

My partition size comes out 4096 bytes smaller than the size indicated by the metadata tables above; I'm assuming the partition size gets rounded to an even number. I calculate 2197090816, Windows says 2197090815, with 4096-byte blocks.

References

Multiflorous answered 20/7, 2012 at 11:40 Comment(0)
