QUESTION:
CentOS 7 does not boot after we extend a Volume Group (VG) on RAID 1 using another RAID 1. The process we used is shown in PROCEDURE: Extend LVM ("root", "/") over RAID 1. What is wrong and/or missing in the process we demonstrated?
CONTEXT:
We are trying to extend a Volume Group (VG) on two (software) RAID 1 disks using two other (software) RAID 1 disks.
PROBLEM:
CentOS 7 does not boot after we extend a VG (Volume Group).
PROCEDURE: Extend LVM ("root", "/") over RAID 1
- Format the hard drives
Run the following 2 commands to create a new MBR partition table on the two added hard drives...
parted /dev/sdc mklabel msdos
parted /dev/sdd mklabel msdos
Reload "fstab"...
mount -a
Use the "fdisk" command to create a new partition on each drive and mark it with the "Linux raid autodetect" partition type. First do this on "/dev/sdc"...
fdisk /dev/sdc
Follow these instructions...
- Type "n" to create a new partition;
- Type "p" to select a primary partition;
- Type "1" to create "/dev/sdc1";
- Press Enter to accept the default first sector;
- Press Enter to accept the default last sector. This partition will span the entire drive;
- Type "t" and enter "fd" to set the partition type to Linux raid autodetect;
- Type "w" to write the changes.
NOTE: Follow the same instructions to create a Linux raid autodetect partition on "/dev/sdd".
We now have two raid member devices, "/dev/sdc1" and "/dev/sdd1".
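For reference, the same partitioning can also be done non-interactively. The sketch below is our own addition (not part of the original procedure) and assumes "sfdisk" is available; double-check the device names with "lsblk" first, since writing a partition table is destructive.

```shell
# raid_spec emits an sfdisk script that creates a DOS (MBR) label with a
# single primary partition of type "fd" (Linux raid autodetect) spanning
# the whole disk.
raid_spec() {
    printf 'label: dos\ntype=fd\n'
}

# Hypothetical usage (DESTRUCTIVE - verify the device names first):
#   raid_spec | sfdisk /dev/sdc
#   raid_spec | sfdisk /dev/sdd
```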
- Create the RAID 1 logical drive
Run the following command to create the RAID 1...
[root@localhost ~]# mdadm --create /dev/md125 --homehost=localhost --name=pv01 --level=mirror --bitmap=internal --consistency-policy=bitmap --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md125 started.
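Before layering LVM on top of the new array, it is worth confirming the mirror is healthy. The helper below is our own sketch (not part of the original procedure): it checks a "/proc/mdstat" excerpt for an array whose status line reports both members up ("[UU]").

```shell
# md_healthy takes the text of /proc/mdstat and an array name; it succeeds
# only when that array's status line reports both mirror members up ("[UU]").
md_healthy() {
    printf '%s\n' "$1" | grep -A1 "^$2 : active raid1" | grep -q '\[UU\]'
}

# On the running system you would call:
#   md_healthy "$(cat /proc/mdstat)" md125
```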
Increasing the logical volume
[root@localhost ~]# pvcreate /dev/md125
Physical volume "/dev/md125" successfully created.
We extend the "centosvg" volume group by adding the physical volume "/dev/md125" ("RAID 1") that we created with the "pvcreate" command just above...
[root@localhost ~]# vgextend centosvg /dev/md125
Volume group "centosvg" successfully extended
Increase the logical volume with the "lvextend" command - this takes our original logical volume and extends it over our new "/dev/md125" disk/partition/physical volume ("RAID 1")...
[root@localhost ~]# lvextend /dev/centosvg/root /dev/md125
Size of logical volume centosvg/root changed from 4.95 GiB (1268 extents) to <12.95 GiB (3314 extents).
Logical volume centosvg/root successfully resized.
Resize the file system with the "xfs_growfs" command in order to use this space...
[root@localhost ~]# xfs_growfs /dev/centosvg/root
meta-data=/dev/mapper/centosvg-root isize=512 agcount=4, agsize=324608 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=1298432, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1298432 to 3393536
- Save our RAID 1 configuration
The following commands update the boot-time mdadm configuration to match the current state of your system...
mdadm --detail --scan > /tmp/mdadm.conf
\cp -v /tmp/mdadm.conf /etc/mdadm.conf
Update the GRUB configuration so that it knows about the new devices...
grub2-mkconfig -o "$(readlink -e /etc/grub2.cfg)"
You must run the following command to generate a new "initramfs" image after running the command above...
dracut -fv
ERROR:
INFRASTRUCTURE/OTHER INFORMATION:
lsblk
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sda2 8:2 0 7G 0 part
└─md126 9:126 0 7G 0 raid1
├─centosvg-root 253:0 0 5G 0 lvm /
└─centosvg-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 8G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 7G 0 part
└─md126 9:126 0 7G 0 raid1
├─centosvg-root 253:0 0 5G 0 lvm /
└─centosvg-swap 253:1 0 2G 0 lvm [SWAP]
sdc 8:32 0 8G 0 disk
sdd 8:48 0 8G 0 disk
sr0 11:0 1 1024M 0 rom
mdadm --examine /dev/sdc /dev/sdd
[root@localhost ~]# mdadm --examine /dev/sdc /dev/sdd
/dev/sdc:
MBR Magic : aa55
Partition[0] : 16775168 sectors at 2048 (type fd)
/dev/sdd:
MBR Magic : aa55
Partition[0] : 16775168 sectors at 2048 (type fd)
mdadm --examine /dev/sdc1 /dev/sdd1
[root@localhost ~]# mdadm --examine /dev/sdc1 /dev/sdd1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 51a622a9:666c7936:1bf1db43:8029ab06
Name : localhost:pv01
Creation Time : Tue Jan 7 13:42:20 2020
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 16764928 sectors (7.99 GiB 8.58 GB)
Array Size : 8382464 KiB (7.99 GiB 8.58 GB)
Data Offset : 10240 sectors
Super Offset : 8 sectors
Unused Space : before=10160 sectors, after=0 sectors
State : clean
Device UUID : f95b50e3:eed41b52:947ddbb4:b42a40d6
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jan 7 13:43:15 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 9d4c040c - correct
Events : 25
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 51a622a9:666c7936:1bf1db43:8029ab06
Name : localhost:pv01
Creation Time : Tue Jan 7 13:42:20 2020
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 16764928 sectors (7.99 GiB 8.58 GB)
Array Size : 8382464 KiB (7.99 GiB 8.58 GB)
Data Offset : 10240 sectors
Super Offset : 8 sectors
Unused Space : before=10160 sectors, after=0 sectors
State : clean
Device UUID : bcb18234:aab93a6c:80384b09:c547fdb9
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jan 7 13:43:15 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 40ca1688 - correct
Events : 25
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
cat /proc/mdstat
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : active raid1 sdd1[1] sdc1[0]
8382464 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md126 : active raid1 sda2[0] sdb2[1]
7332864 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sda1[0] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
mdadm --detail /dev/md125
[root@localhost ~]# mdadm --detail /dev/md125
/dev/md125:
Version : 1.2
Creation Time : Tue Jan 7 13:42:20 2020
Raid Level : raid1
Array Size : 8382464 (7.99 GiB 8.58 GB)
Used Dev Size : 8382464 (7.99 GiB 8.58 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jan 7 13:43:15 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : localhost:pv01
UUID : 51a622a9:666c7936:1bf1db43:8029ab06
Events : 25
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
fdisk -l
[root@localhost ~]# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000f2ab2
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2101247 1049600 fd Linux raid autodetect
/dev/sda2 2101248 16777215 7337984 fd Linux raid autodetect
Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002519d
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 2101247 1049600 fd Linux raid autodetect
/dev/sdb2 2101248 16777215 7337984 fd Linux raid autodetect
Disk /dev/sdc: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007bd31
Device Boot Start End Blocks Id System
/dev/sdc1 2048 16777215 8387584 fd Linux raid autodetect
Disk /dev/sdd: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00086fef
Device Boot Start End Blocks Id System
/dev/sdd1 2048 16777215 8387584 fd Linux raid autodetect
Disk /dev/md127: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md126: 7508 MB, 7508852736 bytes, 14665728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centosvg-root: 5318 MB, 5318377472 bytes, 10387456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centosvg-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md125: 8583 MB, 8583643136 bytes, 16764928 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
df -h
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 496M 6.8M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centosvg-root 5.0G 1.4G 3.7G 27% /
/dev/md127 1020M 164M 857M 17% /boot
tmpfs 100M 0 100M 0% /run/user/0
vgdisplay
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name centosvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 6.99 GiB
PE Size 4.00 MiB
Total PE 1790
Alloc PE / Size 1780 / 6.95 GiB
Free PE / Size 10 / 40.00 MiB
VG UUID 6mKxWb-KOIe-fW1h-zukQ-f7aJ-vxD5-hKAaZG
pvscan
[root@localhost ~]# pvscan
PV /dev/md126 VG centosvg lvm2 [6.99 GiB / 40.00 MiB free]
PV /dev/md125 VG centosvg lvm2 [7.99 GiB / 7.99 GiB free]
Total: 2 [14.98 GiB] / in use: 2 [14.98 GiB] / in no VG: 0 [0 ]
lvdisplay
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centosvg/swap
LV Name swap
VG Name centosvg
LV UUID o5G6gj-1duf-xIRL-JHoO-ux2f-6oQ8-LIhdtA
LV Write Access read/write
LV Creation host, time localhost, 2020-01-06 13:22:08 -0500
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centosvg/root
LV Name root
VG Name centosvg
LV UUID GTbGaF-Wh4J-1zL3-H7r8-p5YZ-kn9F-ayrX8U
LV Write Access read/write
LV Creation host, time localhost, 2020-01-06 13:22:09 -0500
LV Status available
# open 1
LV Size 4.95 GiB
Current LE 1268
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
cat /run/initramfs/rdsosreport.txt
Thank you! =D
[Refs.: https://4fasters.com.br/2017/11/12/lpic-2-o-que-e-e-para-que-serve-o-dracut/ , https://unix.stackexchange.com/a/152249/61742 , https://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch-p2 , https://www.howtoforge.com/setting-up-lvm-on-top-of-software-raid1-rhel-fedora , https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup , https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/ ]
The problem was occurring for the reason below...
That is, for the boot to work, the new array (the new "RAID 1") needs to be entered in the "GRUB_CMDLINE_LINUX" parameter, since it will be part of "root" ("/").
For more details, see the section Save our "RAID 1" configuration and adjust CentOS boot in the complete procedure below.
[Refs.: https://4fasters.com.br/2017/11/12/lpic-2-o-que-e-e-para-que-serve-o-dracut/ , https://forums.centos.org/viewtopic.php?f=47&t=49541#p256406 , https://forums.centos.org/viewtopic.php?f=47&t=51937&start=10#p220414 , https://forums.centos.org/viewtopic.php?t=72667 , https://unix.stackexchange.com/a/152249/61742 , https://unix.stackexchange.com/a/267773/61742 , https://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch-p2 , https://www.howtoforge.com/setting-up-lvm-on-top-of-software-raid1-rhel-fedora , https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup , https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/ ]
Extend LVM ("root", "/") over RAID 1
After physically adding the two new disks, run the command below to list all disks/devices (including RAID subsystems)...
NOTE: In our case the devices will be called "sdc" and "sdd" and by default will be in the paths "/dev/sdc" and "/dev/sdd" respectively.
Run the following 2 commands to create a new MBR partition table on the two added hard drives...
IMPORTANT: Any data that may be on both disks will be destroyed.
Reload "fstab"...
Use the "fdisk" command to create a new partition on each drive and mark it with the "Linux raid autodetect" partition type. First do this on "/dev/sdc"...
Follow these instructions...
NOTE: Follow the same instruction to create a Linux raid autodetect partition on "/dev/sdd".
Execute the following command to create the "RAID 1"...
Create the PV (Physical Volume) to extend our LVM...
We extend the "centosvg" volume group by adding the PV (Physical Volume) "/dev/md/pv01" ("RAID 1") that we created using the "pvcreate" command just above...
TIP: To find out the name of the target VG (Volume Group) use the command "vgdisplay" observing the value of the attribute "VG Name" which in our case is "centosvg".
Increase the LV (Logical Volume) with the "lvextend" command over our new PV (Physical Volume) "/dev/md/pv01"...
TIP: To find out the target Logical Volume (LV) path use the "lvdisplay" command looking at the value of the "LV Path" attribute which in our case is "/dev/centosvg/root".
Resize the file system inside "/dev/centosvg/root" LV (Logical Volume) using the "xfs_growfs" command in order to make use of the new space...
TIP: Use the same path as the LV (Logical Volume) used above that in our case is "/dev/centosvg/root".
Update your boot kernel configuration to match the current state of your system.
Run the command...
... and look at the line containing the array/device "/dev/md/pv01" (our case).
Open the file "/etc/mdadm.conf"...
... and at the end add a line as below...
MODEL
EXAMPLE
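The original model/example did not survive here, but based on the array shown earlier in this post ("/dev/md/pv01", name "localhost:pv01", UUID 51a622a9:666c7936:1bf1db43:8029ab06), the line produced by "mdadm --detail --scan" would look roughly like this (the exact format can vary slightly between mdadm versions):

```
ARRAY /dev/md/pv01 metadata=1.2 name=localhost:pv01 UUID=51a622a9:666c7936:1bf1db43:8029ab06
```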
Open the file "/etc/default/grub"...
... and look for the "GRUB_CMDLINE_LINUX" parameter.
Within the value of the "GRUB_CMDLINE_LINUX" parameter, look for an "rd.lvm.lv" parameter that represents the "root" partition, as below...
MODEL
EXAMPLE
... then add alongside this "rd.lvm.lv" one more "rd.md.uuid" - which in this case is the same as the "[NEW_ARRAY_UUID]" used above - as below...
MODEL
EXAMPLE
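The original model/example was also lost here; reconstructing it from the values shown in this post (LV "centosvg/root" and the new array's UUID), the edit would look roughly like this, where the "..." stand for whatever other options your "GRUB_CMDLINE_LINUX" already contains:

```
# Before:
GRUB_CMDLINE_LINUX="... rd.lvm.lv=centosvg/root ..."
# After, with the new array's UUID appended:
GRUB_CMDLINE_LINUX="... rd.lvm.lv=centosvg/root rd.md.uuid=51a622a9:666c7936:1bf1db43:8029ab06 ..."
```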
Update the GRUB configuration so that it knows about the new devices...
You must run the following command to generate a new "initramfs" image after running the command above...
Finally, reboot...
IMPORTANT: Although the reboot is not mandatory for the process to work, we recommend doing it to check for possible failures.
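After rebooting, a quick way to confirm the fix took effect is to check that the kernel command line now carries the new array's UUID. The helper below is our own sketch (the UUID is the one from this post); on the live system you would feed it "/proc/cmdline".

```shell
# check_cmdline takes the kernel command line text and an md array UUID;
# it succeeds only when "rd.md.uuid=<UUID>" is present in the command line.
check_cmdline() {
    case "$1" in
        *"rd.md.uuid=$2"*) return 0 ;;
        *) return 1 ;;
    esac
}

# On the running system you would call:
#   check_cmdline "$(cat /proc/cmdline)" 51a622a9:666c7936:1bf1db43:8029ab06
```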