Charles Chou
Asked: 2023-05-15 09:42:52 +0800 CST

Recovering a 4-drive RAID5 array after the last 2 drives were accidentally re-initialized as RAID0


I have an Asustor NAS running a 4-drive RAID 5. After a system update it rebooted into the initialization page of the web console. I thought this was part of the upgrade process, so I started the initialization. A few minutes later it felt wrong and I pulled the power. The NAS then booted into a clean OS with all settings gone, and the RAID could not be mounted.

After checking mdadm and fdisk in a terminal, I found that the last 2 drives had been re-initialized as a RAID 0 array (sdc4, sdd4).

I tried to assemble the original RAID, without success:

# mdadm --assemble /dev/mdx /dev/sd*4
mdadm: superblock on /dev/sdc4 doesn't match others - assembly aborted

Below is the output of mdadm --examine /dev/sd*. The original RAID should be [sda4, sdb4, sdc4, sdd4], UUID 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad, 8 TB.
The accidentally created RAID 0 is [sdc4, sdd4], UUID 06b57325:241ba722:6dd303af:baaa5e4e.

/dev/sda:
   MBR Magic : aa55
Partition[0] :       522240 sectors at         2048 (type 83)
Partition[3] :         2047 sectors at            1 (type ee)
mdadm: No md superblock detected on /dev/sda1.
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1c90030d:10445d9f:d39fc32a:06d4b79a
           Name : AS1004T-7CBC:0  (local to host AS1004T-7CBC)
  Creation Time : Sun Jun 11 10:56:28 2017
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : active
    Device UUID : cca1545a:14112668:0ebd0ed3:df55018d

    Update Time : Sun Oct 13 01:05:27 2019
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 95866108 - correct
         Events : 228987


   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
           Name : AS1004T-7CBC:126  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:45 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : f3836318:4899a170:a0018b8b:1aa428ab

    Update Time : Sun May 14 14:40:28 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 48f1cfbb - correct
         Events : 92


   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad
           Name : AS1004T-7CBC:1  (local to host AS1004T-7CBC)
  Creation Time : Sun Jun 11 10:56:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
     Array Size : 8777035776 (8370.43 GiB 8987.68 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 6a18260d:f0d1b882:5608a7e4:8eeabe1f

    Update Time : Sun May 14 09:31:25 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 6e46beec - correct
         Events : 213501

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
   MBR Magic : aa55
Partition[0] :       522240 sectors at         2048 (type 83)
Partition[3] :         2047 sectors at            1 (type ee)
mdadm: No md superblock detected on /dev/sdb1.
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1c90030d:10445d9f:d39fc32a:06d4b79a
           Name : AS1004T-7CBC:0  (local to host AS1004T-7CBC)
  Creation Time : Sun Jun 11 10:56:28 2017
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : active
    Device UUID : 648f0d6d:967f432c:3b9e1ceb:d15959c2

    Update Time : Sun Oct 13 01:05:27 2019
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b9c2a23f - correct
         Events : 228987


   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
           Name : AS1004T-7CBC:126  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:45 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : 8adc82c0:010edc11:5702a9f6:7287da86

    Update Time : Sun May 14 14:40:28 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d91b8119 - correct
         Events : 92


   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad
           Name : AS1004T-7CBC:1  (local to host AS1004T-7CBC)
  Creation Time : Sun Jun 11 10:56:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
     Array Size : 8777035776 (8370.43 GiB 8987.68 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 15bd0bdb:b5fdcfaf:94729f61:ed9e7bea

    Update Time : Sun May 14 09:31:25 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b0f8adf8 - correct
         Events : 213501

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :       522240 sectors at         2048 (type 83)
Partition[3] :         2047 sectors at            1 (type ee)
mdadm: No md superblock detected on /dev/sdc1.
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 14d010c5:aaed7a5c:30956792:cfd0c452
           Name : AS1004T-7CBC:0  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:35 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : 373358f6:76ca625d:e9193081:216676cb

    Update Time : Sun May 14 14:37:42 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : ba188081 - correct
         Events : 880


   Device Role : Active device 1
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
           Name : AS1004T-7CBC:126  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:45 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : 737541e2:f5a3673d:8db35b12:2db86324

    Update Time : Sun May 14 14:40:28 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : dfa191e3 - correct
         Events : 92


   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 06b57325:241ba722:6dd303af:baaa5e4e
           Name : AS1004T-7CBC:1  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:51:00 2023
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : d73a946c:9aa8e26e:c4388d7a:566dcf90

    Update Time : Sun May 14 09:51:00 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 9bd7221c - correct
         Events : 0

     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :       522240 sectors at         2048 (type 83)
Partition[3] :         2047 sectors at            1 (type ee)
mdadm: No md superblock detected on /dev/sdd1.
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 14d010c5:aaed7a5c:30956792:cfd0c452
           Name : AS1004T-7CBC:0  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:35 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : acfa8c63:b226e810:3640a42a:9f8b72b1

    Update Time : Sun May 14 14:37:42 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 6a42effb - correct
         Events : 880


   Device Role : Active device 0
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
           Name : AS1004T-7CBC:126  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:50:45 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : 1dd56ce1:770fa0d6:13127388:46c0d14f

    Update Time : Sun May 14 14:40:28 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 198ac3af - correct
         Events : 92


   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 06b57325:241ba722:6dd303af:baaa5e4e
           Name : AS1004T-7CBC:1  (local to host AS1004T-7CBC)
  Creation Time : Sun May 14 09:51:00 2023
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 7804860416 (3721.65 GiB 3996.09 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 1dece618:58743ad6:9f56922c:fa500120

    Update Time : Sun May 14 09:51:00 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 6528b89e - correct
         Events : 0

     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

I have the following questions:

  • Did the re-initialization as a RAID 0 array overwrite my data?
  • Should I just zero the superblock on the 3rd drive and re-assemble the first 3 drives?
  • Since the first 2 drives look fine, can I restore the superblocks of the last 2 drives from the first 2 drives?
  • I want to recover the RAID 5 data.

I ran an experiment to check whether (and how much) data mdadm --create destroys on the created array; fortunately, not much.

root@osboxes:/home/osboxes# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd{b,c,d,e}
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 100352K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@osboxes:/home/osboxes# mkfs.ext4 /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 301056 1k blocks and 75480 inodes
Filesystem UUID: 9f536c05-4178-4aa3-8b1a-c96f3c34de4e
Superblock backups stored on blocks: 
    8193, 24577, 40961, 57345, 73729, 204801, 221185

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done 

root@osboxes:/home/osboxes# mount /dev/md0 /mnt/
root@osboxes:/home/osboxes# dd if=/dev/urandom of=/mnt/test count=200000
200000+0 records in
200000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 0.860987 s, 119 MB/s
root@osboxes:/home/osboxes# md5sum /mnt/test 
5b6024b89c0facb25bfb3055b21c4042  /mnt/test
root@osboxes:/home/osboxes# umount /mnt/
root@osboxes:/home/osboxes# mdadm --stop md0
mdadm: stopped md0
root@osboxes:/home/osboxes# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sd{d,e} # this command finishes instantly; I don't think it had time to write 100 MB of data
mdadm: chunk size defaults to 512K
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:53:07 2023
mdadm: /dev/sde appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:53:07 2023
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@osboxes:/home/osboxes# mdadm --assemble --force /dev/md0 /dev/sd{b,c,d,e} 
mdadm: /dev/sdd is busy - skipping
mdadm: /dev/sde is busy - skipping
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: /dev/md0 is already in use.
root@osboxes:/home/osboxes# mdadm --stop md0
mdadm: stopped md0
root@osboxes:/home/osboxes# mdadm --assemble --force /dev/md0 /dev/sd{b,c,d,e} 
mdadm: superblock on /dev/sdd doesn't match others - assembly aborted
root@osboxes:/home/osboxes# mdadm --create /dev/md126 --assume-clean --raid-devices=4 --level=5  /dev/sd{b,c,d,e}
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:53:07 2023
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:53:07 2023
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=2 ctime=Mon May 15 02:55:14 2023
mdadm: /dev/sde appears to be part of a raid array:
       level=raid0 devices=2 ctime=Mon May 15 02:55:14 2023
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md126 started.
root@osboxes:/home/osboxes# mount /dev/md126 /mnt
root@osboxes:/home/osboxes# md5sum /mnt/
lost+found/ test        
root@osboxes:/home/osboxes# md5sum /mnt/test 
5b6024b89c0facb25bfb3055b21c4042  /mnt/test

But if I create a filesystem on the new array and write a file to it, the re-created array is corrupted, though still readable.

root@osboxes:/home/osboxes# umount /mnt/
root@osboxes:/home/osboxes# mdadm --stop md126
mdadm: stopped md126
root@osboxes:/home/osboxes# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sd{d,e}
mdadm: chunk size defaults to 512K
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:57:09 2023
mdadm: /dev/sde appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:57:09 2023
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@osboxes:/home/osboxes# mdadm --assemble --force /dev/md0 /dev/sd{b,c,d,e} ^C
root@osboxes:/home/osboxes# mkfs.ext4 /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 200704 1k blocks and 50200 inodes
Filesystem UUID: c1ded6ea-d212-473a-a282-7c3dd4f6777e
Superblock backups stored on blocks: 
    8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

root@osboxes:/home/osboxes# mount /dev/md0 /mnt/
root@osboxes:/home/osboxes# ls /mnt/
lost+found
root@osboxes:/home/osboxes# echo test>/mnt/test
root@osboxes:/home/osboxes# umount /mnt/
root@osboxes:/home/osboxes# mdadm --stop md0
mdadm: stopped md0
root@osboxes:/home/osboxes# mdadm --create /dev/md126 --assume-clean --raid-devices=4 --level=5  /dev/sd{b,c,d,e}
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:57:09 2023
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=4 ctime=Mon May 15 02:57:09 2023
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=2 ctime=Mon May 15 03:01:55 2023
mdadm: /dev/sde appears to be part of a raid array:
       level=raid0 devices=2 ctime=Mon May 15 03:01:55 2023
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md126 started.
root@osboxes:/home/osboxes# mount /dev/md126 /mnt/
root@osboxes:/home/osboxes# ls /mnt/
lost+found  test
root@osboxes:/home/osboxes# md5sum /mnt/test 
4d389d72a1db56e9d73cbe753fabf595  /mnt/test
software-raid

1 Answer

  1. Best Answer
     Nikita Kipriyanov
     2023-05-15T13:57:09+08:00

    If you don't have spare space to store at least twice the raw size of your array, i.e. at least 3 TB * 4 drives * 2 = 24 TB of free space to dedicate to the recovery operation, stop and hand this whole job over to a professional data recovery service.

    Now, answers.

    1. If the initialization ran mdadm --create without --assume-clean, then yes, the data was overwritten with zeros.

    2. No. You must not change anything on the drives. Your first required step is to make dumps (images) of all four members of the RAID.

    3. No. Those superblocks are different on purpose: some fields in them are shared across the array, while others are per-device. In particular, each superblock records the role (the ordered position) of its device in the array; a quick way to compare these fields is sketched below.
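
    The following is a minimal, read-only sketch for comparing the shared and per-device fields across all four members, reusing the same mdadm --examine output you already collected:

    for p in /dev/sd{a,b,c,d}4; do
        echo "== $p =="
        mdadm --examine "$p" | grep -E 'Array UUID|Device UUID|Raid Level|Device Role|Events'
    done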

    As outlined in (1), the data located at the beginning of the array is most likely irreversibly destroyed (as if two drives of a RAID5 were lost simultaneously). It might be possible to recover the "tail", the part beyond the point where you stopped the process. That doesn't necessarily mean you can recover the user data stored there, because the filesystem structures in that area presumably also depend on blocks that lie in the destroyed region. But a decent filesystem keeps many replicas of its superblock, some of which may happen to sit in the non-damaged area, so there is still hope. You may try to expose this non-damaged tail and recover whatever is possible from it.

    1. Begin by taking the required backup of all four devices, as outlined in (2), using e.g. dd or ddrescue; see the sketch below. This will use half of your spare space.
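
    For instance, imaging each member partition onto the spare space might look like this (the /backup paths are placeholders for wherever your spare space is mounted; ddrescue keeps a map file so an interrupted copy can be resumed):

    ddrescue /dev/sda4 /backup/sda4.img /backup/sda4.map
    ddrescue /dev/sdb4 /backup/sdb4.img /backup/sdb4.map
    ddrescue /dev/sdc4 /backup/sdc4.img /backup/sdc4.map
    ddrescue /dev/sdd4 /backup/sdd4.img /backup/sdd4.map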

    2. Then you may proceed with re-creating the array with mdadm --create /dev/mdX -e1.2 -l5 -n4 --assume-clean /dev/sd[abcd]4. Pay attention to the order of the drives: in the command above they are most likely listed in an incorrect order, and you'll have to experiment a bit to find the correct one. Probably it is [cbda] or [dbca], because the surviving devices have roles sda4=3 and sdb4=1 (taken from the Device Role property). If you guess wrong, you'll have to copy the dumps back to the drives and start over; that is what the dumps are for, but see the tip below. Ideally this will take no more than 2 guesses; at worst there are 4! = 1 * 2 * 3 * 4 = 24 different orderings of four drives. The two likely candidates are spelled out below.
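
    Given those roles, the two most likely candidates would look like this (a sketch only: /dev/md1 is a placeholder name, and the explicit --chunk=64 is an assumption taken from the old superblocks above, since a modern mdadm would otherwise default to 512K; the data offset may also need to match the old 262144 sectors):

    # candidate [cbda]: roles 0=sdc4, 1=sdb4, 2=sdd4, 3=sda4
    mdadm --create /dev/md1 -e1.2 -l5 -n4 --chunk=64 --assume-clean /dev/sdc4 /dev/sdb4 /dev/sdd4 /dev/sda4

    # candidate [dbca]: roles 0=sdd4, 1=sdb4, 2=sdc4, 3=sda4
    mdadm --create /dev/md1 -e1.2 -l5 -n4 --chunk=64 --assume-clean /dev/sdd4 /dev/sdb4 /dev/sdc4 /dev/sda4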

    What you should expect is that the data at the end of the array turns out to be clean. How to check that depends on what you stored there. Your array uses a 64 KiB chunk size, so you have to verify that the 64 KiB stretches of data on the array end up in the correct order. Again, see the tip below to ease the guessing process.

    3. When the correct order is found, dump the image of the assembled array to the remaining spare free space. Now you'll carry out the filesystem recovery on it. If that's ext4, you might try running e2fsck -b <superblock>, specifying a backup superblock that sits in the non-damaged area; which one that is, you can guess by running mke2fs -n, which simulates the creation of the filesystem without actually writing anything. Basically, what you get after this step is what it was possible to recover. A short sketch follows.
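
    A minimal sketch of that last step, assuming the data partition held ext4 (array.img is a placeholder for the dumped image; 32768 is only the typical first backup superblock for a 4 KiB block size, and the mke2fs -n output lists the real candidates):

    # simulate filesystem creation to list where backup superblocks would sit; -n writes nothing
    mke2fs -n /dev/md1
    # then point e2fsck at a backup superblock that falls in the undamaged tail,
    # running it against the dumped image rather than the only remaining copy
    e2fsck -b 32768 /path/to/array.img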

    Tip. After taking the required full dumps, you can speed up the guessing process by instantiating read-write overlays, so the data on the drives is never changed. In case of a wrong guess you then only need to recreate the overlay images instead of copying the dumps back to the drives, which is much faster than copying 12 TB. I described this in another answer, but for your problem you create overlays not for the assembled array but for the four individual devices, and then build the array from the layered nbdX devices (a sketch of one way to do this follows the next paragraph).

    This also lets you skip dumping the filesystem image. You can build all 2, or even all 24, possible orderings on these overlays simultaneously (which would require 8 or 96 overlay images and NBDs respectively, but those images contain only the changes and tend not to grow much during recovery operations like this). Then try to recover the filesystem on each one and see which is correct. Afterwards, remove all incorrect attempts, copy the contents of the filesystem onto the spare free space, remove the array on the devices, re-create it anew, and copy the surviving data back.
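
    One way to build such overlays is with qcow2 files exported through qemu-nbd. This is a minimal sketch under the assumption that qemu-utils and the nbd kernel module are available; the overlay file names are placeholders, and nothing here writes to the real partitions:

    modprobe nbd max_part=16
    for d in a b c d; do
        # copy-on-write overlay backed by the real partition
        qemu-img create -f qcow2 -b /dev/sd${d}4 -F raw overlay-sd${d}4.qcow2
    done
    qemu-nbd -c /dev/nbd0 overlay-sda4.qcow2
    qemu-nbd -c /dev/nbd1 overlay-sdb4.qcow2
    qemu-nbd -c /dev/nbd2 overlay-sdc4.qcow2
    qemu-nbd -c /dev/nbd3 overlay-sdd4.qcow2
    # experiment on the nbd devices instead of the real drives, e.g. candidate [cbda]:
    # mdadm --create /dev/md1 -e1.2 -l5 -n4 --chunk=64 --assume-clean /dev/nbd2 /dev/nbd1 /dev/nbd3 /dev/nbd0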
