Accepted
newbie
Asked: 2021-10-22 01:16:22 +0800 CST

Recovering a degraded RAID1


Below is the output of lsblk, mdadm, and /proc/mdstat for my 2-disk RAID1 array:

anand@ironman:~$ lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0 465.8G  0 disk  
|-sda1                      8:1    0   976M  0 part  
| `-md0                     9:0    0 975.4M  0 raid1 
|   `-vg_boot-boot (dm-6) 253:6    0   972M  0 lvm   /boot
`-sda2                      8:2    0 464.8G  0 part  
sdb                         8:16   0 465.8G  0 disk  
|-sdb1                      8:17   0   976M  0 part  
`-sdb2                      8:18   0 464.8G  0 part  
  `-md1                     9:1    0 464.7G  0 raid1 
    |-vg00-root (dm-0)    253:0    0  93.1G  0 lvm   /
    |-vg00-home (dm-1)    253:1    0  96.6G  0 lvm   /home
    |-vg00-var (dm-2)     253:2    0  46.6G  0 lvm   /var
    |-vg00-usr (dm-3)     253:3    0  46.6G  0 lvm   /usr
    |-vg00-swap1 (dm-4)   253:4    0   7.5G  0 lvm   [SWAP]
    `-vg00-tmp (dm-5)     253:5    0   952M  0 lvm   /tmp

anand@ironman:~$ cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdb2[1]
      487253824 blocks super 1.2 [2/1] [_U]
      
md0 : active raid1 sda1[0]
      998848 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

anand@ironman:~$ sudo mdadm -D /dev/md0 /dev/md1
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 22 21:00:35 2013
     Raid Level : raid1
     Array Size : 998848 (975.60 MiB 1022.82 MB)
  Used Dev Size : 998848 (975.60 MiB 1022.82 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 21 14:35:36 2021
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:0  (local to host ironman)
           UUID : cbcb9fb6:f7727516:9328d30a:0a970c9b
         Events : 4415

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       0        0        1      removed
/dev/md1:
        Version : 1.2
  Creation Time : Wed May 22 21:00:47 2013
     Raid Level : raid1
     Array Size : 487253824 (464.68 GiB 498.95 GB)
  Used Dev Size : 487253824 (464.68 GiB 498.95 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 21 14:35:45 2021
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:1  (local to host ironman)
           UUID : 3f64c0ce:fcb9ff92:d5fd68d7:844b7e12
         Events : 63025777

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2

What are the commands to recover from this raid1 failure?

Do I have to get a new hard disk to rebuild the raid1 setup safely?

Update 1:

anand@ironman:~$ sudo smartctl -H /dev/sda 
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Please note the following marginal Attributes:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
190 Airflow_Temperature_Cel 0x0022   054   040   045    Old_age   Always   In_the_past 46 (0 174 46 28)

anand@ironman:~$ sudo smartctl -H /dev/sdb
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

anand@ironman:~$ 

SMART info:

Output of smartctl -a -d ata /dev/sda
Output of smartctl -a -d ata /dev/sdb

Update 2:

anand@ironman:~$ sudo blkid -o list
device                                  fs_type        label           mount point                                 UUID
-------------------------------------------------------------------------------------------------------------------------------------------------------
/dev/sda1                               linux_raid_member ironman:0    (in use)                                    cbcb9fb6-f772-7516-9328-d30a0a970c9b
/dev/sda2                               linux_raid_member ironman:1    (not mounted)                               3f64c0ce-fcb9-ff92-d5fd-68d7844b7e12
/dev/sdb1                               linux_raid_member ironman:0    (not mounted)                               cbcb9fb6-f772-7516-9328-d30a0a970c9b
/dev/sdb2                               linux_raid_member ironman:1    (in use)                                    3f64c0ce-fcb9-ff92-d5fd-68d7844b7e12
/dev/md0                                LVM2_member                    (in use)                                    JKI3Lr-VdDK-Ogsk-KOQk-jSKJ-udAV-Vt4ckP
/dev/md1                                LVM2_member                    (in use)                                    CAqW3D-WJ7g-2lbw-G3cn-nidp-2jdQ-evFe7r
/dev/mapper/vg00-root                   ext4           root            /                                           82334ff8-3eff-4fc7-9b86-b11eeda314ae
/dev/mapper/vg00-home                   ext4           home            /home                                       8e9f74dd-08e4-45a3-a492-d4eaf22a1d68
/dev/mapper/vg00-var                    ext4           var             /var                                        0e798199-3219-458d-81b8-b94a5736f1be
/dev/mapper/vg00-usr                    ext4           usr             /usr                                        d8a335fc-72e6-4b98-985e-65cff08c4e22
/dev/mapper/vg00-swap1                  swap                           <swap>                                      b95ee4ca-fcca-487f-b6ff-d6c0d49426d8
/dev/mapper/vg00-tmp                    ext4           tmp             /tmp                                        c879fae8-bd25-431d-be3e-6120d0381cb8
/dev/mapper/vg_boot-boot                ext4           boot            /boot                                       12684df6-6c4a-450f-8ed1-d3149609a149

-- End of update 2

Update 3 - after following Nikita's advice:

/dev/md0:
        Version : 1.2
  Creation Time : Wed May 22 21:00:35 2013
     Raid Level : raid1
     Array Size : 998848 (975.60 MiB 1022.82 MB)
  Used Dev Size : 998848 (975.60 MiB 1022.82 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct 22 21:20:09 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:0  (local to host ironman)
           UUID : cbcb9fb6:f7727516:9328d30a:0a970c9b
         Events : 4478

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1

anand@ironman:~/.scripts/automatem/bkp$ sudo mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Wed May 22 21:00:47 2013
     Raid Level : raid1
     Array Size : 487253824 (464.68 GiB 498.95 GB)
  Used Dev Size : 487253824 (464.68 GiB 498.95 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct 22 21:21:37 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:1  (local to host ironman)
           UUID : 3f64c0ce:fcb9ff92:d5fd68d7:844b7e12
         Events : 63038935

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2

Thank you all!

Anand

mdadm raid1

1 Answer

  1. Best Answer
     Nikita Kipriyanov
     2021-10-22T03:10:03+08:00

    Looks like both of your disks are dying:

    /dev/sda:
      4 Start_Stop_Count        0x0032   096   096   020    Old_age   Always       -       5039
      5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       240
    187 Reported_Uncorrect      0x0032   079   079   000    Old_age   Always       -       21
    195 Hardware_ECC_Recovered  0x001a   044   015   000    Old_age   Always       -       26908616
    
    /dev/sdb:
      4 Start_Stop_Count        0x0012   099   099   000    Old_age   Always       -       4911
      5 Reallocated_Sector_Ct   0x0033   088   088   005    Pre-fail  Always       -       90
    196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       114
    197 Current_Pending_Sector  0x0022   001   001   000    Old_age   Always       -       9640
    

    So, once again: never trust what a disk says about its own overall health. It lies!
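
    A minimal sketch of pulling the critical raw attributes directly instead of relying on the PASSED verdict (plain smartctl; the grep pattern just narrows the output):

    # SMART overall health can read PASSED on a failing drive; check the
    # raw attribute values (reallocated/pending/uncorrectable) instead:
    sudo smartctl -A /dev/sda | grep -E 'Reallocated|Pending|Uncorrect'
    sudo smartctl -A /dev/sdb | grep -E 'Reallocated|Pending|Uncorrect'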

    You need to attach a third disk, partition it, and add it into your RAID. Wait until it finishes rebuilding. Install the boot loader onto it. Then remove both failing drives, attach a fourth disk, and replicate again to restore redundancy.
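
    A sketch of that procedure in commands, assuming the replacement disk appears as /dev/sdc (the device name is an assumption; verify it with lsblk before running anything):

    # Copy the partition layout from the surviving disk to the new one (MBR):
    sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdc
    # For GPT the equivalent would be:
    #   sudo sgdisk -R=/dev/sdc /dev/sdb && sudo sgdisk -G /dev/sdc

    # Add the new partitions to both degraded arrays; resync starts immediately.
    sudo mdadm --manage /dev/md0 --add /dev/sdc1
    sudo mdadm --manage /dev/md1 --add /dev/sdc2

    # Watch the rebuild until both arrays show [UU].
    watch cat /proc/mdstat

    # Once the resync completes, install the boot loader on the new disk.
    sudo grub-install /dev/sdc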

    And set up regular checks and monitoring to avoid such a dangerous situation in the future.
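
    A minimal sketch of such monitoring, assuming a Debian-style layout (the paths are standard; the mail address is a placeholder):

    # /etc/mdadm/mdadm.conf: mail on array events (degraded, failed, ...):
    MAILADDR admin@example.com

    # Confirm that event mail actually gets delivered:
    sudo mdadm --monitor --scan --test --oneshot

    # /etc/smartd.conf: short self-test daily, long test weekly, mail on trouble:
    /dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
    /dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com

    # Debian also ships a monthly array scrub (checkarray) in /etc/cron.d/mdadm.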


    Seeing a separate boot RAID array with LVM on it is surprising. Quite unusual. The original point of a separate boot partition was to keep it out of LVM so it stays easy to access (early boot loaders knew nothing about LVM, so that was a requirement).

