Edit:
The scenario in this wiki, where one drive's event count is slightly below the rest of the array and another drive's is significantly below it, suggests omitting the oldest drive from the `--force` assembly and then adding it back (or adding a fresh drive, if the disk is actually bad) once the array has been assembled in a degraded state. Does that make sense in my case, or is it preferable to attempt the `--force` assembly with all 4 drives, given that the two stale drives have identical event counts?
Given my limited RAID knowledge, I figured I'd ask about my specific situation before trying anything. Losing the data on these 4 drives wouldn't be the end of the world for me, but it would still be nice to get it back.
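Concretely, the two options I'm weighing would look something like this (a sketch only, nothing I've run yet; the device names are my drives as identified in the output below):

```shell
# Option 1: force-assemble with all 4 members, accepting the event-count gap
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Option 2 (the wiki's suggestion): omit the oldest member, force-assemble
# the remaining 3 in a degraded state, then add the omitted drive back so
# it rebuilds from parity
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd /dev/sde
mdadm /dev/md0 --add /dev/sdb
```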
I originally migrated the RAID5 array from an old machine to a new one without any problems. I used it for about 2 days, until I noticed that 2 of the drives were no longer listed in the BIOS boot screen. Since the array still assembled and worked fine once I was in Linux, I didn't think much of it.
The next day the array stopped working, so I connected a PCIe SATA card and replaced all of my SATA cables. After that, all 4 drives appeared in the BIOS boot screen, so I assumed my cables or SATA ports had caused the initial problem.
Now I'm left with a broken array. `mdadm --assemble` lists two of the drives as `(possibly out of date)`, and `mdadm --examine` shows `22717` events on the stale drives and `23199` on the other two. This wiki entry suggests that an event-count difference of `<50` can be overcome by assembling with `--force`, but my 4 drives are `482` events apart.
Below is all the relevant RAID info. I knew before the array failed that the primary GPT tables on all 4 drives were corrupt, but since everything was working at the time, I hadn't gotten around to fixing that.
mdadm --assemble --scan --verbose
mdadm: /dev/sde is identified as a member of /dev/md/guyyst-server:0, slot 2.
mdadm: /dev/sdd is identified as a member of /dev/md/guyyst-server:0, slot 3.
mdadm: /dev/sdc is identified as a member of /dev/md/guyyst-server:0, slot 1.
mdadm: /dev/sdb is identified as a member of /dev/md/guyyst-server:0, slot 0.
mdadm: added /dev/sdb to /dev/md/guyyst-server:0 as 0 (possibly out of date)
mdadm: added /dev/sdc to /dev/md/guyyst-server:0 as 1 (possibly out of date)
mdadm: added /dev/sdd to /dev/md/guyyst-server:0 as 3
mdadm: added /dev/sde to /dev/md/guyyst-server:0 as 2
mdadm: /dev/md/guyyst-server:0 assembled from 2 drives - not enough to start the array.
mdadm --examine /dev/sd[bcde]
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 356cd1df:3a5c992d:c9899cbc:4c01e6d9
Name : guyyst-server:0
Creation Time : Wed Mar 27 23:49:58 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 7ea39918:2680d2f3:a6c3b0e6:0e815210
Internal Bitmap : 8 sectors from superblock
Update Time : Fri May 1 03:53:45 2020
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 76a81505 - correct
Events : 22717
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 356cd1df:3a5c992d:c9899cbc:4c01e6d9
Name : guyyst-server:0
Creation Time : Wed Mar 27 23:49:58 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 119ed456:cbb187fa:096d15e1:e544db2c
Internal Bitmap : 8 sectors from superblock
Update Time : Fri May 1 03:53:45 2020
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : d285ae78 - correct
Events : 22717
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 356cd1df:3a5c992d:c9899cbc:4c01e6d9
Name : guyyst-server:0
Creation Time : Wed Mar 27 23:49:58 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 2670e048:4ebf581d:bf9ea089:0eae56c3
Internal Bitmap : 8 sectors from superblock
Update Time : Fri May 1 04:12:18 2020
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 26662f2e - correct
Events : 23199
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 356cd1df:3a5c992d:c9899cbc:4c01e6d9
Name : guyyst-server:0
Creation Time : Wed Mar 27 23:49:58 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 093856ae:bb19e552:102c9f77:86488154
Internal Bitmap : 8 sectors from superblock
Update Time : Fri May 1 04:12:18 2020
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 40917946 - correct
Events : 23199
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 4
Persistence : Superblock is persistent
State : inactive
Working Devices : 4
Name : guyyst-server:0
UUID : 356cd1df:3a5c992d:c9899cbc:4c01e6d9
Events : 23199
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 32 - /dev/sdc
- 8 48 - /dev/sdd
- 8 16 - /dev/sdb
fdisk -l
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 79F4A900-C9B7-03A9-402A-7DDE6D72EA00
Device Start End Sectors Size Type
/dev/sdb1 2048 7814035455 7814033408 3.7T Microsoft basic data
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43B95B20-C9B1-03A9-C856-EE506C72EA00
Device Start End Sectors Size Type
/dev/sdc1 2048 7814035455 7814033408 3.7T Microsoft basic data
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1E276A80-99EA-03A7-A0DA-89877AE6E900
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 11BD8020-C9B5-03A9-0860-6F446D72EA00
Device Start End Sectors Size Type
/dev/sde1 2048 7814035455 7814033408 3.7T Microsoft basic data
smartctl -a -d ata /dev/sd[bcde]
As a pastebin, since it exceeds the character limit: https://pastebin.com/vMVCX9EH
In general, you have to anticipate data loss in this situation. Two of the four disks were kicked out of the RAID at roughly the same point in time, so after assembling it back together you will be left with a damaged filesystem.
If at all possible, I would only experiment after `dd`-ing all disks as a backup first. Using all 4 disks will let you identify which blocks differ (because the parity checksums there won't match), but it won't help you compute the correct state. You can start a `checkarray` after force-assembling all 4 and then look at `/sys/block/mdX/md/mismatch_cnt`. Estimating how "damaged" the filesystem is may or may not be interesting. Rebuilding the array can only use the information of three disks to recalculate parity. Since the kicked disks have identical event counts, using either one of them will cause the same (partially wrong) information to be recalculated.