Boot failure - Extending LVM ("root", "/") over RAID 1 (CentOS 7)

Eduardo Lucio
Asked: 2020-01-08 11:23:54 +0800 CST
Accepted

QUESTION:

CentOS 7 fails to boot after we extended a volume group (VG) that lives on one RAID 1 onto another RAID 1. The procedure we used is shown in PROCEDURE: Extend LVM ("root", "/") over RAID 1 below. What is wrong with, and/or missing from, the procedure we followed?


CONTEXT:

We are trying to extend a volume group (VG) that sits on two RAID 1 (software) disks onto two additional RAID 1 (software) disks.


PROBLEM:

After we extended the VG (volume group), CentOS 7 fails to boot.


PROCEDURE: Extend LVM ("root", "/") over RAID 1

  • Format hard drives

Run the following 2 commands to create a new MBR partition table on the two added hard drives...

parted /dev/sdc mklabel msdos
parted /dev/sdd mklabel msdos

Reload "fstab"...

mount -a

Use the "fdisk" command to create a new partition on each drive and format it as a "Linux raid autodetect" file system. First do this on "/dev/sdc"...

fdisk /dev/sdc

Follow these instructions...

  • Type "n" to create a new partition;
  • Type "p" to select primary partition;
  • Type "1" to create /dev/sdc1;
  • Press Enter to accept the default first sector;
  • Press Enter to accept the default last sector. The partition will span the entire drive;
  • Type "t" and enter "fd" to set the partition type to "Linux raid autodetect";
  • Type "w" to apply the above changes.

NOTE: Follow the same instructions to create a Linux raid autodetect partition on "/dev/sdd".

Now we have two RAID member devices, "/dev/sdc1" and "/dev/sdd1".
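
As an aside, the interactive fdisk dialog above can be scripted. A minimal non-interactive sketch using sfdisk (an assumption on our part, not part of the original procedure; older sfdisk versions may need --force), where the input ",,fd" means "one partition, default start, full size, type fd":

for disk in /dev/sdc /dev/sdd; do
    # create a single full-disk partition of type fd (Linux raid autodetect)
    echo ',,fd' | sfdisk "$disk"
done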

  • Create RAID 1 logical drive

Execute the following command to create the RAID 1...

[root@localhost ~]# mdadm --create /dev/md125 --homehost=localhost --name=pv01 --level=mirror --bitmap=internal --consistency-policy=bitmap --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md125 started.
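
A newly created RAID 1 starts an initial background resync. The array is usable right away, but you can watch the progress before continuing, for example:

cat /proc/mdstat                                     # shows a progress bar while resyncing
mdadm --detail /dev/md125 | grep -E 'State|Rebuild'  # "Rebuild Status" appears during resync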

Create the physical volume (PV) to extend our LVM...

[root@localhost ~]# pvcreate /dev/md125
  Physical volume "/dev/md125" successfully created.

We extend the "centosvg" volume group by adding the physical volume "/dev/md125" ("RAID 1") that we created with the "pvcreate" command just above...

[root@localhost ~]# vgextend centosvg /dev/md125
  Volume group "centosvg" successfully extended

Increase the logical volume with the "lvextend" command - this takes our original logical volume and extends it over our new disk/partition/physical volume ("RAID 1") "/dev/md125"...

[root@localhost ~]# lvextend /dev/centosvg/root /dev/md125
  Size of logical volume centosvg/root changed from 4.95 GiB (1268 extents) to <12.95 GiB (3314 extents).
  Logical volume centosvg/root successfully resized.

Resize the file system with the "xfs_growfs" command so that it makes use of this space...

[root@localhost ~]# xfs_growfs /dev/centosvg/root
meta-data=/dev/mapper/centosvg-root isize=512    agcount=4, agsize=324608 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1298432, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1298432 to 3393536
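
Note: the "lvextend" and "xfs_growfs" steps can also be done in one go; lvextend's "-r" ("--resizefs") flag calls fsadm, which knows how to grow a mounted XFS file system after extending the LV:

lvextend -r /dev/centosvg/root /dev/md125
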
  • Save our RAID 1 configuration

This command updates your boot kernel configuration to match the current state of your system...

mdadm --detail --scan > /tmp/mdadm.conf
\cp -v /tmp/mdadm.conf /etc/mdadm.conf

Update the GRUB configuration so that it knows about the new device...

grub2-mkconfig -o "$(readlink -e /etc/grub2.cfg)"

After running the above command, you should run the following command to generate a new "initramfs" image...

dracut -fv
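
As a sanity check (our suggestion, not part of the original procedure), dracut's lsinitrd can confirm that the freshly built image really carries the updated mdadm.conf and the md-raid module:

lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'etc/mdadm|mdraid'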

ERROR:

Boot fails!


INFRASTRUCTURE / OTHER INFORMATION:

lsblk

[root@localhost ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                   8:0    0    8G  0 disk  
├─sda1                8:1    0    1G  0 part  
│ └─md127             9:127  0 1023M  0 raid1 /boot
└─sda2                8:2    0    7G  0 part  
  └─md126             9:126  0    7G  0 raid1 
    ├─centosvg-root 253:0    0    5G  0 lvm   /
    └─centosvg-swap 253:1    0    2G  0 lvm   [SWAP]
sdb                   8:16   0    8G  0 disk  
├─sdb1                8:17   0    1G  0 part  
│ └─md127             9:127  0 1023M  0 raid1 /boot
└─sdb2                8:18   0    7G  0 part  
  └─md126             9:126  0    7G  0 raid1 
    ├─centosvg-root 253:0    0    5G  0 lvm   /
    └─centosvg-swap 253:1    0    2G  0 lvm   [SWAP]
sdc                   8:32   0    8G  0 disk  
sdd                   8:48   0    8G  0 disk  
sr0                  11:0    1 1024M  0 rom

mdadm --examine /dev/sdc /dev/sdd

[root@localhost ~]# mdadm --examine /dev/sdc /dev/sdd
/dev/sdc:
   MBR Magic : aa55
Partition[0] :     16775168 sectors at         2048 (type fd)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :     16775168 sectors at         2048 (type fd)

mdadm --examine /dev/sdc1 /dev/sdd1

[root@localhost ~]# mdadm --examine /dev/sdc1 /dev/sdd1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 51a622a9:666c7936:1bf1db43:8029ab06
           Name : localhost:pv01
  Creation Time : Tue Jan  7 13:42:20 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 16764928 sectors (7.99 GiB 8.58 GB)
     Array Size : 8382464 KiB (7.99 GiB 8.58 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : f95b50e3:eed41b52:947ddbb4:b42a40d6

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Jan  7 13:43:15 2020
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 9d4c040c - correct
         Events : 25


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 51a622a9:666c7936:1bf1db43:8029ab06
           Name : localhost:pv01
  Creation Time : Tue Jan  7 13:42:20 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 16764928 sectors (7.99 GiB 8.58 GB)
     Array Size : 8382464 KiB (7.99 GiB 8.58 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : bcb18234:aab93a6c:80384b09:c547fdb9

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Jan  7 13:43:15 2020
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 40ca1688 - correct
         Events : 25


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

cat /proc/mdstat

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md125 : active raid1 sdd1[1] sdc1[0]
      8382464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sda2[0] sdb2[1]
      7332864 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sda1[0] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

mdadm --detail /dev/md125

[root@localhost ~]# mdadm --detail /dev/md125
/dev/md125:
           Version : 1.2
     Creation Time : Tue Jan  7 13:42:20 2020
        Raid Level : raid1
        Array Size : 8382464 (7.99 GiB 8.58 GB)
     Used Dev Size : 8382464 (7.99 GiB 8.58 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jan  7 13:43:15 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:pv01
              UUID : 51a622a9:666c7936:1bf1db43:8029ab06
            Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1

fdisk -l

[root@localhost ~]# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000f2ab2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2101247     1049600   fd  Linux raid autodetect
/dev/sda2         2101248    16777215     7337984   fd  Linux raid autodetect

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002519d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2101247     1049600   fd  Linux raid autodetect
/dev/sdb2         2101248    16777215     7337984   fd  Linux raid autodetect

Disk /dev/sdc: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007bd31

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    16777215     8387584   fd  Linux raid autodetect

Disk /dev/sdd: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00086fef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    16777215     8387584   fd  Linux raid autodetect

Disk /dev/md127: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md126: 7508 MB, 7508852736 bytes, 14665728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centosvg-root: 5318 MB, 5318377472 bytes, 10387456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centosvg-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md125: 8583 MB, 8583643136 bytes, 16764928 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

df -h

[root@localhost ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   484M     0  484M   0% /dev
tmpfs                      496M     0  496M   0% /dev/shm
tmpfs                      496M  6.8M  489M   2% /run
tmpfs                      496M     0  496M   0% /sys/fs/cgroup
/dev/mapper/centosvg-root  5.0G  1.4G  3.7G  27% /
/dev/md127                1020M  164M  857M  17% /boot
tmpfs                      100M     0  100M   0% /run/user/0

vgdisplay

[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               centosvg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               6.99 GiB
  PE Size               4.00 MiB
  Total PE              1790
  Alloc PE / Size       1780 / 6.95 GiB
  Free  PE / Size       10 / 40.00 MiB
  VG UUID               6mKxWb-KOIe-fW1h-zukQ-f7aJ-vxD5-hKAaZG

pvscan

[root@localhost ~]# pvscan
  PV /dev/md126   VG centosvg        lvm2 [6.99 GiB / 40.00 MiB free]
  PV /dev/md125   VG centosvg        lvm2 [7.99 GiB / 7.99 GiB free]
  Total: 2 [14.98 GiB] / in use: 2 [14.98 GiB] / in no VG: 0 [0   ]

lvdisplay

[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/centosvg/swap
  LV Name                swap
  VG Name                centosvg
  LV UUID                o5G6gj-1duf-xIRL-JHoO-ux2f-6oQ8-LIhdtA
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-01-06 13:22:08 -0500
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/centosvg/root
  LV Name                root
  VG Name                centosvg
  LV UUID                GTbGaF-Wh4J-1zL3-H7r8-p5YZ-kn9F-ayrX8U
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-01-06 13:22:09 -0500
  LV Status              available
  # open                 1
  LV Size                4.95 GiB
  Current LE             1268
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

cat /run/initramfs/rdsosreport.txt

Thanks! =D

[Refs.: https://4fasters.com.br/2017/11/12/lpic-2-o-que-e-e-para-que-serve-o-dracut/ , https://unix.stackexchange.com/a/152249/61742 , https://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch-p2 , https://www.howtoforge.com/setting-up-lvm-on-top-of-software-raid1-rhel-fedora , https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup , https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/ ]

Tags: centos, boot

1 Answer

  1. Best Answer
    Eduardo Lucio
    Answered: 2020-01-09T09:55:01+08:00

    The problem was occurring for the reasons below...

    The dracut documentation implies that any md raid arrays should be automatically assembled, and that the "rd.md.uuid" parameter should only be used if you only want certain arrays assembled as part of the boot process.

    It seems that in reality, the arrays are not assembled automatically, and are in fact only assembled when the "rd.md.uuid" parameter is set (for each array that needs to be assembled). It could be that since the "rd.lvm.lv" parameter was already set, that it somehow interfered with "md", but I don't have the time to test that.

    In short, adding rd.md.uuid parameters for both of my arrays to the "GRUB_CMDLINE_LINUX" variable in "/etc/default/grub", and then regenerating the grub config fixed the issue for me.

    That is, for the boot to work, the new array (the new "RAID 1") needs to be listed in the "GRUB_CMDLINE_LINUX" parameter, since it will be part of "root" ("/").
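
    A minimal one-liner sketch of the fix, assuming CentOS 7's grubby tool is available (the UUID below is this machine's new array; take yours from "mdadm --detail --scan"). Note that grubby edits the existing boot entries directly, so "/etc/default/grub" should still be updated as described below so that future kernels inherit the parameter:

    # append the new array's UUID to the command line of every installed kernel
    grubby --update-kernel=ALL --args="rd.md.uuid=e5feec81:20d5e154:9a1e2dce:75a03c71"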

    For more details, see the section Save our "RAID 1" configuration and adjust CentOS boot in the complete procedure below.

    [Refs.: https://4fasters.com.br/2017/11/12/lpic-2-o-que-e-e-para-que-serve-o-dracut/ , https://forums.centos.org/viewtopic.php?f=47&t=49541#p256406 , https://forums.centos.org/viewtopic.php?f=47&t=51937&start=10#p220414 , https://forums.centos.org/viewtopic.php?t=72667 , https://unix.stackexchange.com/a/152249/61742 , https://unix.stackexchange.com/a/267773/61742 , https://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch-p2 , https://www.howtoforge.com/setting-up-lvm-on-top-of-software-raid1-rhel-fedora , https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup , https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/ ]


    Extend LVM ("root", "/") over RAID 1

    • Format hard drives

    After physically adding the two new disks, run the command below to list all disks/devices (including RAID subsystems)...

    lsblk
    

    NOTE: In our case the devices will be called "sdc" and "sdd" and by default will be in the paths "/dev/sdc" and "/dev/sdd" respectively.

    Run the following 2 commands to make a new MBR partition table on the two added hard drives...

    parted /dev/sdc mklabel msdos
    parted /dev/sdd mklabel msdos
    

    IMPORTANT: Any data that may be on both disks will be destroyed.

    Reload "fstab"...

    mount -a
    

    Use the "fdisk" command to create a new partition on each drive and format them as a "Linux raid autodetect" file system. First do this on "/dev/sdc"...

    fdisk /dev/sdc
    

    Follow these instructions...

    • Type "n" to create a new partition;
    • Type "p" to select primary partition;
    • Type "1" to create /dev/sdb1;
    • Press Enter to choose the default first sector;
    • Press Enter to choose the default last sector. This partition will span across the entire drive;
    • We need to change the partition type, so type "t";
    • Enter "fd" to set partition type to "Linux raid autodetect";
    • Type "w" to apply the above changes.

    NOTE: Follow the same instructions to create a Linux raid autodetect partition on "/dev/sdd".

    • Create RAID 1 logical drive

    Execute the following command to create the "RAID 1"...

    IMPORTANT:

    • For the "array" name we will use "/dev/md/pv01", since the highest name already in use for these devices in our case is "pv00" (note the output of the command "mdadm --detail --scan"). The array name "/dev/md/pv01" will also be a symbolic link to a "/dev/md12X" device; this link is generated automatically by "mdadm" (command below). The "/dev/md12X" path represents the new device, which in turn mirrors the devices "/dev/sdc1" and "/dev/sdd1" ("RAID 1"). You can later confirm what the device name actually is ("/dev/md125", "/dev/md126", "/dev/md127", etc...) with the ls /dev/md* command or the readlink -f /dev/md/pv01 command;
    • The "array" name "/dev/md/pv01" follows the naming pattern currently used by CentOS. We recommend using it.
    [root@localhost ~]# mdadm --create /dev/md/pv01 --name=pv01 --level=mirror --bitmap=internal --consistency-policy=bitmap --raid-devices=2 /dev/sdc1 /dev/sdd1
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md/pv01 started.
    

    Create the PV (Physical Volume) to extend our LVM...

    [root@localhost ~]# pvcreate /dev/md/pv01
      Physical volume "/dev/md/pv01" successfully created.
    

    We extend the "centosvg" volume group by adding the PV (Physical Volume) "/dev/md/pv01" ("RAID 1") which we created using the "pvcreate" command just above...

    TIP: To find out the name of the target VG (Volume Group) use the command "vgdisplay" observing the value of the attribute "VG Name" which in our case is "centosvg".

    [root@localhost ~]# vgextend centosvg /dev/md/pv01
      Volume group "centosvg" successfully extended
    

    Increase the LV (Logical Volume) with the "lvextend" command over our new PV (Physical Volume) "/dev/md/pv01"...

    TIP: To find out the target Logical Volume (LV) path use the "lvdisplay" command looking at the value of the "LV Path" attribute which in our case is "/dev/centosvg/root".

    [root@localhost ~]# lvextend /dev/centosvg/root /dev/md/pv01
      Size of logical volume centosvg/root changed from 4.95 GiB (1268 extents) to <12.95 GiB (3314 extents).
      Logical volume centosvg/root successfully resized.
    

    Resize the file system inside "/dev/centosvg/root" LV (Logical Volume) using the "xfs_growfs" command in order to make use of the new space...

    TIP: Use the same path as the LV (Logical Volume) used above that in our case is "/dev/centosvg/root".

    [root@localhost ~]# xfs_growfs /dev/centosvg/root
    meta-data=/dev/mapper/centosvg-root isize=512    agcount=4, agsize=324608 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0 spinodes=0
    data     =                       bsize=4096   blocks=1298432, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 1298432 to 3393536
    
    • Save our "RAID 1" configuration and adjust CentOS boot

    Update your boot kernel configuration to match the current state of your system.

    Run the command...

    [root@localhost ~]# mdadm --detail --scan
    ARRAY /dev/md/boot metadata=1.2 name=localhost:boot UUID=50ed66ef:6fb373da:5690ac4b:4fb82a45
    ARRAY /dev/md/pv00 metadata=1.2 name=localhost:pv00 UUID=283a4a43:43c85816:55c6adf0:ddcbfb2b
    ARRAY /dev/md/pv01 metadata=1.2 name=localhost.localdomain:pv01 UUID=e5feec81:20d5e154:9a1e2dce:75a03c71
    

    ... and look at the line containing the array/device "/dev/md/pv01" (our case).
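
    If you prefer to extract just the UUID of the new array programmatically, one possible sketch (assuming the array name used in this procedure):

    mdadm --detail --scan | awk '/\/dev\/md\/pv01/ { sub(/^UUID=/, "", $NF); print $NF }'
    # -> e5feec81:20d5e154:9a1e2dce:75a03c71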

    Open the file "/etc/mdadm.conf"...

    vi /etc/mdadm.conf
    

    ... and at the end add a line as below...

    MODEL

    ARRAY [NEW_ARRAY_NAME] level=raid1 num-devices=2 UUID=[NEW_ARRAY_UUID]
    

    Example

    ARRAY /dev/md/pv01 level=raid1 num-devices=2 UUID=e5feec81:20d5e154:9a1e2dce:75a03c71
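
    Equivalent to the manual edit, you can append the scanned line directly (a sketch; the scanned format differs slightly from the model above but is equally valid - review the file afterwards):

    mdadm --detail --scan | grep '/dev/md/pv01' >> /etc/mdadm.conf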
    

    Open the file "/etc/default/grub"...

    vi /etc/default/grub
    

    ... and look for the "GRUB_CMDLINE_LINUX" parameter.

    Within the value of the "GRUB_CMDLINE_LINUX" parameter, look for the "rd.lvm.lv" parameter that represents the "root" partition, as below...

    MODEL

    rd.lvm.lv=[VG_NAME]/[LV_NAME]
    

    Example

    rd.lvm.lv=centosvg/root
    

    ... and next to this "rd.lvm.lv" add one more "rd.md.uuid" - in this case with the same "[NEW_ARRAY_UUID]" used above - as below...

    MODEL

    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=[VG_NAME]/[LV_NAME] rd.md.uuid=283a4a43:43c85816:55c6adf0:ddcbfb2b rd.md.uuid=50ed66ef:6fb373da:5690ac4b:4fb82a45 rd.md.uuid=[NEW_ARRAY_UUID] rd.lvm.lv=centosvg/swap rhgb quiet"
    

    Example

    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centosvg/root rd.md.uuid=283a4a43:43c85816:55c6adf0:ddcbfb2b rd.md.uuid=50ed66ef:6fb373da:5690ac4b:4fb82a45 rd.md.uuid=e5feec81:20d5e154:9a1e2dce:75a03c71 rd.lvm.lv=centosvg/swap rhgb quiet"
    

    Update the GRUB configuration so that it knows about the new device...

    grub2-mkconfig -o /boot/grub2/grub.cfg
    

    After running the above command, you should run the following command to generate a new "initramfs" image...

    dracut -fv
    

    Finally, reboot...

    reboot
    

    IMPORTANT: Although the reboot is not mandatory for this procedure to work, we recommend doing it to check for possible failures.
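
    After the reboot, a quick way to confirm everything came back as expected (a suggested checklist, not part of the original procedure):

    lsblk              # md125 should appear under sdc1/sdd1, holding centosvg-root
    cat /proc/mdstat   # all three arrays active and [UU]
    pvscan             # both PVs (/dev/md125, /dev/md126) shown in VG "centosvg"
    df -h /            # "/" now reports the grown size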

