AskOverflow.Dev


Questions [mdadm] (server)

Pro West
Asked: 2025-01-30 20:25:27 +0800 CST

RAID1 NAS drive still reports no space left on device, even after adding a new disk

  • 7

My NAS drive has a problem: it is reporting that there is no space left on the device.

On inspection I found that mdadm was reporting the array as degraded because one of the two disks had failed. I replaced the disk and rebuilt the array; the rebuild completed after 24 hours.

All the output from df, mdadm, and so on indicates that the filesystem is healthy and that there should be 1.6 TB of space available.

But attempting to save a file, even a small one, results in "No space left on device".

Diagnostic output is below. The RAID1 filesystem is mounted at /mnt/pools/A/A0.

Space check:

df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 50M  3.0M   48M   6% /
/dev/root.old         6.5M  2.1M  4.4M  33% /initrd
none                   50M  3.0M   48M   6% /
/dev/md0_vg/BFDlv     4.0G  624M  3.2G  17% /boot
/dev/loop0            592M  538M   54M  91% /mnt/apps
/dev/loop1            4.9M  2.2M  2.5M  47% /etc
/dev/loop2            260K  260K     0 100% /oem
tmpfs                 122M     0  122M   0% /mnt/apps/lib/init/rw
tmpfs                 122M     0  122M   0% /dev/shm
/dev/mapper/md0_vg-vol1
                       16G  1.5G   15G  10% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
                      3.7T  2.0T  1.7T  55% /mnt/pools/A/A0

Inode check:

df -ih
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
rootfs                   31K     565     30K    2% /
/dev/root.old           1.7K     130    1.6K    8% /initrd
none                     31K     565     30K    2% /
/dev/md0_vg/BFDlv       256K      20    256K    1% /boot
/dev/loop0               25K     25K      11  100% /mnt/apps
/dev/loop1              1.3K    1.1K     152   88% /etc
/dev/loop2                21      21       0  100% /oem
tmpfs                    31K       4     31K    1% /mnt/apps/lib/init/rw
tmpfs                    31K       1     31K    1% /dev/shm
/dev/mapper/md0_vg-vol1
                         17M    9.7K     16M    1% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
                        742M    2.6M    739M    1% /mnt/pools/A/A0

RAID status:

mdadm --detail /dev/md1
/dev/md1:
        Version : 01.00
  Creation Time : Mon Mar  7 08:45:49 2011
     Raid Level : raid1
     Array Size : 3886037488 (3706.01 GiB 3979.30 GB)
  Used Dev Size : 7772074976 (7412.03 GiB 7958.60 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan 30 03:16:36 2025
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ix2-200-DC386F:1
           UUID : 8a192f2c:9829df88:a6961d81:20478f62
         Events : 365739

    Number   Major   Minor   RaidDevice State
       3       8       18        0      active sync   /dev/sdb2
       2       8        2        1      active sync   /dev/sda2

Partitions:

sfdisk -l /dev/sda

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util sfdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 486401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+      -       0- 2147483647+  ee  EFI GPT
        start: (c,h,s) expected (0,0,2) found (0,0,1)
/dev/sda2          0       -       0          0    0  Empty
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty



sfdisk -l /dev/sdb

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util sfdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 486401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+      -       0- 2147483647+  ee  EFI GPT
        start: (c,h,s) expected (0,0,2) found (0,0,1)
/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty

pvs, lvs, and vgs output:

pvs
  PV         VG          Fmt  Attr PSize  PFree
  /dev/md0   md0_vg      lvm2 a-   20.01G    0 
  /dev/md1   5244dd0f_vg lvm2 a-    3.62T    0 


lvs
  LV         VG          Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv58141b0d 5244dd0f_vg -wi-ao  3.62T                                      
  BFDlv      md0_vg      -wi-ao  4.00G                                      
  vol1       md0_vg      -wi-ao 16.01G 


vgs
  VG          #PV #LV #SN Attr   VSize  VFree
  5244dd0f_vg   1   1   0 wz--n-  3.62T    0 
  md0_vg        1   2   0 wz--n- 20.01G    0 

lvdisplay, pvdisplay, and vgdisplay output:

lvdisplay 
  --- Logical volume ---
  LV Name                /dev/5244dd0f_vg/lv58141b0d
  VG Name                5244dd0f_vg
  LV UUID                hLUJyo-C8ge-1SRc-gvdg-dlLn-5A7e-D55XeO
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.62 TB
  Current LE             948739
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Name                /dev/md0_vg/BFDlv
  VG Name                md0_vg
  LV UUID                N48AUD-nucp-gP18-wmQi-Ym12-3G1L-7wE1jd
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.00 GB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                /dev/md0_vg/vol1
  VG Name                md0_vg
  LV UUID                73CtS1-b4KB-cLrG-hgUz-Qsli-Mlte-QhaUzy
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                16.01 GB
  Current LE             4098
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               5244dd0f_vg
  PV Size               3.62 TB / not usable 2.30 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              948739
  Free PE               0
  Allocated PE          948739
  PV UUID               mdm9UZ-dhcm-T26Z-LRAB-Pevo-4y7t-OHRDRr
   
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               md0_vg
  PV Size               20.01 GB / not usable 1.06 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              5122
  Free PE               0
  Allocated PE          5122
  PV UUID               AGH7Ci-jGbF-bLsB-pMKr-IttE-7GtR-e5Rf9k


vgdisplay 
  --- Volume group ---
  VG Name               5244dd0f_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.62 TB
  PE Size               4.00 MB
  Total PE              948739
  Alloc PE / Size       948739 / 3.62 TB
  Free  PE / Size       0 / 0   
  VG UUID               FB2tzp-8Gr2-6Dlj-9Dck-Tyc4-Gxx5-HHIsBD
   
  --- Volume group ---
  VG Name               md0_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.01 GB
  PE Size               4.00 MB
  Total PE              5122
  Alloc PE / Size       5122 / 20.01 GB
  Free  PE / Size       0 / 0   
  VG UUID               EA3tJR-nVdm-0Dcf-YtBE-t1Qj-peHc-Sh0zXe
   

I don't know what to do next.

Update:

The number of directories on this filesystem is 280,286.

$ cd /mnt/pools/A/A0/
$ ls
Backups  Documents  Movies  Music  Pictures  QuikTransfer  stls_userdata  TimeMachine
$ find . -type d | wc -l
280286

Update 2: mount command output:

mount
rootfs on / type rootfs (rw)
/dev/root.old on /initrd type ext2 (rw,relatime,errors=continue)
none on / type tmpfs (rw,relatime,size=51200k,nr_inodes=31083)
/dev/md0_vg/BFDlv on /boot type ext2 (rw,noatime,errors=continue)
/dev/loop0 on /mnt/apps type ext2 (ro,relatime)
/dev/loop1 on /etc type ext2 (rw,sync,noatime)
/dev/loop2 on /oem type cramfs (ro,relatime)
proc on /proc type proc (rw,relatime)
none on /proc/bus/usb type usbfs (rw,relatime)
none on /proc/fs/nfsd type nfsd (rw,relatime)
none on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620)
tmpfs on /mnt/apps/lib/init/rw type tmpfs (rw,nosuid,relatime,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
/dev/mapper/md0_vg-vol1 on /mnt/system type xfs (rw,noatime,attr2,logbufs=8,noquota)
/dev/mapper/5244dd0f_vg-lv58141b0d on /mnt/pools/A/A0 type xfs (rw,noatime,attr2,nobarrier,logbufs=8,noquota)

Example of a failing command:

mount > /mnt/pools/A/A0/Documents/mount.txt
-sh: /mnt/pools/A/A0/Documents/mount.txt: No space left on device
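
Since the mount output above shows the volume is XFS, one sensible next diagnostic (my suggestion, not part of the original post) is to look at XFS free-space fragmentation and inode geometry: ENOSPC despite plenty of reported free space is a classic symptom of XFS being unable to find usable free extents or allocate inodes. A hedged sketch:

# Filesystem geometry (allocation groups, inode size, inode32/inode64 behavior)
xfs_info /mnt/pools/A/A0

# Read-only free-space histogram; heavily fragmented free space shows up here
xfs_db -r -c "freesp -s" /dev/mapper/5244dd0f_vg-lv58141b0d

If the filesystem is running with the old inode32 allocator, remounting with -o inode64 is a commonly suggested fix on multi-terabyte volumes.
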
mdadm
  • 1 Answer
  • 97 Views
Brian
Asked: 2024-12-01 02:46:14 +0800 CST

mdadm: force all drives to be used in a [re]sync

  • 7

I have a 2-disk MD raid1 array. One of the drives is slow, so I want to replace it. But out of caution[1] and for safety's sake, I want to add the new (third) drive and sync it before removing the slow one: it is slow but otherwise works fine and holds valid data, and I want to avoid degrading the array before I have to.

So I added the third disk and it started syncing; so far so good. But it uses only one of the two existing drives as the sync source, and wouldn't you know it, in keeping with what I call the 50/50/90 rule[2], it is using the slow disk as the (sole) source. I can see with sar that it is (only) using the slow disk:

01:21:55 PM       tps     rkB/s     wkB/s     dkB/s   areq-sz    aqu-sz     await     %util DEV
01:22:00 PM     91.60      0.00  45147.30      0.00    492.87      0.14      1.48     11.26 sdb
01:22:00 PM      0.40      0.00      1.70      0.00      4.25      0.13    314.00     48.08 sde
01:22:00 PM     91.60  45145.60      1.70      0.00    492.87      3.41     37.26     88.30 sdd
01:22:00 PM      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00 md0

sdd is the slower disk. If MD read from sde instead, it would be about 4x as fast:

# dd if=/dev/sde of=/dev/null bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 23.1946 s, 226 MB/s

Frankly, I'm surprised MD is this simplistic, and that during a rebuild it doesn't read from all the disks, precisely for this reason/situation.

So, is there any way to force sde to be used in this sync, either alongside sdd or instead of it?

P.S. The sync speed is not being limited by dev.raid.speed_limit_max:

# sysctl -n dev.raid.speed_limit_max
20000000

[1] Call me paranoid, but you know how it goes: it's when your array is down to a single disk that that disk fails.

[2] If there's a 50% chance of something going one way or the other, 90% of the time it goes the way you don't want.
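
One workaround sometimes suggested (my addition; it is not guaranteed to change which member the kernel picks as the resync source) is to mark the slow member write-mostly through sysfs, which tells raid1 read balancing to avoid it:

# Assumes the array is md0, as in the sar output above
echo writemostly > /sys/block/md0/md/dev-sdd/state

# Clear the flag again once the sync is done
echo -writemostly > /sys/block/md0/md/dev-sdd/state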

mdadm
  • 1 Answer
  • 25 Views
HighDraw
Asked: 2023-10-01 18:34:14 +0800 CST

RAID1 drive disappeared, but could be re-added after a reboot. Should I be worried?

  • 6

I'm running Debian 12 and use an MD RAID1 array (2 drives) to store my personal data (there are no system files on the array).

Today I received a mail from mdadm about a DegradedArray event, at a time when the drives are not normally in use:

This is an automatically generated mail message from mdadm
running on hostname

A DegradedArray event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc[0]
      976630464 blocks super 1.2 [2/1] [U_]
      bitmap: 4/8 pages [16KB], 65536KB chunk

unused devices: <none>

/var/log/syslog contains nothing relevant, but dmesg shows:

[652897.364496] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x40000 action 0x6 frozen
[652897.364512] ata2: SError: { CommWake }
[652897.364520] ata2.00: failed command: FLUSH CACHE EXT
[652897.364525] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 15
                         res 40/00:00:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
[652897.364541] ata2.00: status: { DRDY }
[652897.364549] ata2: hard resetting link
[652902.720479] ata2: found unknown device (class 0)
[652907.364975] ata2: softreset failed (1st FIS failed)
[652907.364988] ata2: hard resetting link
[652912.716814] ata2: found unknown device (class 0)
[652917.365220] ata2: softreset failed (1st FIS failed)
[652917.365233] ata2: hard resetting link
[652922.724814] ata2: found unknown device (class 0)
[652952.365391] ata2: softreset failed (1st FIS failed)
[652952.365406] ata2: limiting SATA link speed to 3.0 Gbps
[652952.365409] ata2: hard resetting link
[652957.420814] ata2: found unknown device (class 0)
[652957.580941] ata2: found unknown device (class 0)
[652957.580966] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[652962.596788] ata2.00: qc timeout after 5000 msecs (cmd 0xec)
[652962.596807] ata2.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[652962.596811] ata2.00: revalidation failed (errno=-5)
[652962.596824] ata2: hard resetting link
[652967.956818] ata2: found unknown device (class 0)
[652972.597225] ata2: softreset failed (1st FIS failed)
[652972.597239] ata2: hard resetting link
[652977.188682] INFO: task md0_raid1:242 blocked for more than 120 seconds.
[652977.188696]       Not tainted 6.1.0-12-amd64 #1 Debian 6.1.52-1
[652977.188703] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[652977.188708] task:md0_raid1       state:D stack:0     pid:242   ppid:2      flags:0x00004000
[652977.188716] Call Trace:
[652977.188719]  <TASK>
[652977.188724]  __schedule+0x351/0xa20
[652977.188736]  schedule+0x5d/0xe0
[652977.188745]  md_super_wait+0x9e/0xd0 [md_mod]
[652977.188770]  ? cpuusage_read+0x10/0x10
[652977.188777]  write_page+0x2b7/0x3c0 [md_mod]
[652977.188801]  ? md_super_wait+0x23/0xd0 [md_mod]
[652977.188824]  md_update_sb.part.0+0x300/0x7e0 [md_mod]
[652977.188847]  ? unregister_md_personality+0x70/0x70 [md_mod]
[652977.188868]  md_check_recovery+0x15a/0x5b0 [md_mod]
[652977.188892]  raid1d+0x8a/0x1990 [raid1]
[652977.188903]  ? update_load_avg+0x7e/0x780
[652977.188910]  ? psi_group_change+0x145/0x360
[652977.188915]  ? sched_clock_local+0xe/0x80
[652977.188920]  ? _raw_spin_unlock+0x15/0x30
[652977.188925]  ? finish_task_switch.isra.0+0x9b/0x300
[652977.188929]  ? __switch_to+0x106/0x410
[652977.188936]  ? __schedule+0x359/0xa20
[652977.188943]  ? unregister_md_personality+0x70/0x70 [md_mod]
[652977.188963]  ? preempt_count_add+0x6a/0xa0
[652977.188966]  ? _raw_spin_lock_irqsave+0x23/0x50
[652977.188970]  ? preempt_count_add+0x6a/0xa0
[652977.188975]  ? unregister_md_personality+0x70/0x70 [md_mod]
[652977.188993]  md_thread+0xaa/0x180 [md_mod]
[652977.189012]  ? cpuusage_read+0x10/0x10
[652977.189017]  kthread+0xe9/0x110
[652977.189023]  ? kthread_complete_and_exit+0x20/0x20
[652977.189028]  ret_from_fork+0x22/0x30
[652977.189036]  </TASK>
[652977.189041] INFO: task jbd2/md0-8:1065 blocked for more than 120 seconds.
[652977.189046]       Not tainted 6.1.0-12-amd64 #1 Debian 6.1.52-1
[652977.189051] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[652977.189056] task:jbd2/md0-8      state:D stack:0     pid:1065  ppid:2      flags:0x00004000
[652977.189061] Call Trace:
[652977.189062]  <TASK>
[652977.189064]  __schedule+0x351/0xa20
[652977.189071]  schedule+0x5d/0xe0
[652977.189076]  md_write_start+0x198/0x2a0 [md_mod]
[652977.189095]  ? cpuusage_read+0x10/0x10
[652977.189100]  raid1_make_request+0xac/0xbaf [raid1]
[652977.189109]  ? iomap_iter+0x78/0x310
[652977.189116]  ? mempool_alloc+0x85/0x1b0
[652977.189122]  ? kmem_cache_alloc+0x148/0x2e0
[652977.189129]  md_handle_request+0x131/0x1e0 [md_mod]
[652977.189149]  __submit_bio+0x89/0x130
[652977.189154]  submit_bio_noacct_nocheck+0x163/0x370
[652977.189159]  ? submit_bio_noacct+0x79/0x4a0
[652977.189163]  jbd2_journal_commit_transaction+0xdb3/0x1a70 [jbd2]
[652977.189187]  ? _raw_spin_unlock+0x15/0x30
[652977.189191]  ? finish_task_switch.isra.0+0x9b/0x300
[652977.189194]  ? __switch_to+0x106/0x410
[652977.189202]  kjournald2+0xa9/0x280 [jbd2]
[652977.189222]  ? cpuusage_read+0x10/0x10
[652977.189227]  ? jbd2_fc_wait_bufs+0xa0/0xa0 [jbd2]
[652977.189246]  kthread+0xe9/0x110
[652977.189250]  ? kthread_complete_and_exit+0x20/0x20
[652977.189256]  ret_from_fork+0x22/0x30
[652977.189263]  </TASK>
[652977.948973] ata2: found unknown device (class 0)
[652982.597296] ata2: softreset failed (1st FIS failed)
[652982.597309] ata2: hard resetting link
[652987.948814] ata2: found unknown device (class 0)
[653017.597291] ata2: softreset failed (1st FIS failed)
[653017.597306] ata2: limiting SATA link speed to 1.5 Gbps
[653017.597309] ata2: hard resetting link
[653022.648471] ata2: found unknown device (class 0)
[653022.808780] ata2: found unknown device (class 0)
[653022.808797] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[653032.996794] ata2.00: qc timeout after 10000 msecs (cmd 0xec)
[653032.996813] ata2.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[653032.996817] ata2.00: revalidation failed (errno=-5)
[653032.996832] ata2: hard resetting link
[653038.348823] ata2: found unknown device (class 0)
[653042.996967] ata2: softreset failed (1st FIS failed)
[653042.996982] ata2: hard resetting link
[653048.348821] ata2: found unknown device (class 0)
[653052.996756] ata2: softreset failed (1st FIS failed)
[653052.996770] ata2: hard resetting link
[653058.348811] ata2: found unknown device (class 0)
[653087.997129] ata2: softreset failed (1st FIS failed)
[653087.997145] ata2: hard resetting link
[653093.057056] ata2: found unknown device (class 0)
[653093.216467] ata2: found unknown device (class 0)
[653093.216484] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[653098.020894] INFO: task kcompactd0:71 blocked for more than 120 seconds.
[653098.020910]       Not tainted 6.1.0-12-amd64 #1 Debian 6.1.52-1
[653098.020918] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[653098.020923] task:kcompactd0      state:D stack:0     pid:71    ppid:2      flags:0x00004000
[653098.020932] Call Trace:
[653098.020935]  <TASK>
[653098.020940]  __schedule+0x351/0xa20
[653098.020952]  ? bit_wait+0x60/0x60
[653098.020959]  schedule+0x5d/0xe0
[653098.020965]  io_schedule+0x42/0x70
[653098.020971]  bit_wait_io+0xd/0x60
[653098.020977]  __wait_on_bit_lock+0x5f/0xa0
[653098.020984]  out_of_line_wait_on_bit_lock+0x91/0xb0
[653098.020991]  ? sugov_init+0x350/0x350
[653098.020997]  __buffer_migrate_folio+0xb8/0x270
[653098.021006]  move_to_new_folio+0x56/0x150
[653098.021013]  migrate_pages+0xc51/0x1480
[653098.021019]  ? isolate_freepages_block+0x410/0x410
[653098.021027]  ? release_freepages+0xc0/0xc0
[653098.021034]  ? do_pages_stat+0x360/0x360
[653098.021041]  compact_zone+0x97e/0xdb0
[653098.021048]  ? sched_clock_local+0xe/0x80
[653098.021053]  ? finish_task_switch.isra.0+0x9b/0x300
[653098.021059]  proactive_compact_node+0x87/0xc0
[653098.021069]  kcompactd+0x34c/0x420
[653098.021075]  ? cpuusage_read+0x10/0x10
[653098.021080]  ? kcompactd_do_work+0x2a0/0x2a0
[653098.021086]  kthread+0xe9/0x110
[653098.021092]  ? kthread_complete_and_exit+0x20/0x20
[653098.021098]  ret_from_fork+0x22/0x30
[653098.021107]  </TASK>
[653098.021117] INFO: task md0_raid1:242 blocked for more than 241 seconds.
[653098.021124]       Not tainted 6.1.0-12-amd64 #1 Debian 6.1.52-1
[653098.021129] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[653098.021134] task:md0_raid1       state:D stack:0     pid:242   ppid:2      flags:0x00004000
[653098.021139] Call Trace:
[653098.021141]  <TASK>
[653098.021143]  __schedule+0x351/0xa20
[653098.021150]  schedule+0x5d/0xe0
[653098.021157]  md_super_wait+0x9e/0xd0 [md_mod]
[653098.021179]  ? cpuusage_read+0x10/0x10
[653098.021184]  write_page+0x2b7/0x3c0 [md_mod]
[653098.021205]  ? md_super_wait+0x23/0xd0 [md_mod]
[653098.021225]  md_update_sb.part.0+0x300/0x7e0 [md_mod]
[653098.021246]  ? unregister_md_personality+0x70/0x70 [md_mod]
[653098.021265]  md_check_recovery+0x15a/0x5b0 [md_mod]
[653098.021286]  raid1d+0x8a/0x1990 [raid1]
[653098.021296]  ? update_load_avg+0x7e/0x780
[653098.021301]  ? psi_group_change+0x145/0x360
[653098.021305]  ? sched_clock_local+0xe/0x80
[653098.021310]  ? _raw_spin_unlock+0x15/0x30
[653098.021314]  ? finish_task_switch.isra.0+0x9b/0x300
[653098.021318]  ? __switch_to+0x106/0x410
[653098.021324]  ? __schedule+0x359/0xa20
[653098.021330]  ? unregister_md_personality+0x70/0x70 [md_mod]
[653098.021349]  ? preempt_count_add+0x6a/0xa0
[653098.021352]  ? _raw_spin_lock_irqsave+0x23/0x50
[653098.021356]  ? preempt_count_add+0x6a/0xa0
[653098.021360]  ? unregister_md_personality+0x70/0x70 [md_mod]
[653098.021379]  md_thread+0xaa/0x180 [md_mod]
[653098.021398]  ? cpuusage_read+0x10/0x10
[653098.021403]  kthread+0xe9/0x110
[653098.021407]  ? kthread_complete_and_exit+0x20/0x20
[653098.021412]  ret_from_fork+0x22/0x30
[653098.021420]  </TASK>
[653098.021424] INFO: task jbd2/md0-8:1065 blocked for more than 241 seconds.
[653098.021430]       Not tainted 6.1.0-12-amd64 #1 Debian 6.1.52-1
[653098.021435] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[653098.021440] task:jbd2/md0-8      state:D stack:0     pid:1065  ppid:2      flags:0x00004000
[653098.021445] Call Trace:
[653098.021446]  <TASK>
[653098.021449]  __schedule+0x351/0xa20
[653098.021455]  schedule+0x5d/0xe0
[653098.021460]  md_write_start+0x198/0x2a0 [md_mod]
[653098.021479]  ? cpuusage_read+0x10/0x10
[653098.021484]  raid1_make_request+0xac/0xbaf [raid1]
[653098.021493]  ? iomap_iter+0x78/0x310
[653098.021500]  ? mempool_alloc+0x85/0x1b0
[653098.021505]  ? kmem_cache_alloc+0x148/0x2e0
[653098.021511]  md_handle_request+0x131/0x1e0 [md_mod]
[653098.021531]  __submit_bio+0x89/0x130
[653098.021536]  submit_bio_noacct_nocheck+0x163/0x370
[653098.021541]  ? submit_bio_noacct+0x79/0x4a0
[653098.021545]  jbd2_journal_commit_transaction+0xdb3/0x1a70 [jbd2]
[653098.021569]  ? _raw_spin_unlock+0x15/0x30
[653098.021573]  ? finish_task_switch.isra.0+0x9b/0x300
[653098.021577]  ? __switch_to+0x106/0x410
[653098.021584]  kjournald2+0xa9/0x280 [jbd2]
[653098.021604]  ? cpuusage_read+0x10/0x10
[653098.021609]  ? jbd2_fc_wait_bufs+0xa0/0xa0 [jbd2]
[653098.021628]  kthread+0xe9/0x110
[653098.021633]  ? kthread_complete_and_exit+0x20/0x20
[653098.021638]  ret_from_fork+0x22/0x30
[653098.021645]  </TASK>
[653124.644468] ata2.00: qc timeout after 30000 msecs (cmd 0xec)
[653124.644485] ata2.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[653124.644493] ata2.00: revalidation failed (errno=-5)
[653124.644498] ata2.00: disable device
[653130.000802] ata2: found unknown device (class 0)
[653134.645444] ata2: softreset failed (1st FIS failed)
[653140.000802] ata2: found unknown device (class 0)
[653144.644791] ata2: softreset failed (1st FIS failed)
[653149.996821] ata2: found unknown device (class 0)
[653179.644808] ata2: softreset failed (1st FIS failed)
[653184.696807] ata2: found unknown device (class 0)
[653184.856889] ata2: found unknown device (class 0)
[653184.856909] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[653184.856943] ata2: EH complete
[653184.856995] sd 1:0:0:0: [sdb] tag#11 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK cmd_age=348s
[653184.857003] sd 1:0:0:0: [sdb] tag#11 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[653184.857013] I/O error, dev sdb, sector 16 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 2
[653184.857028] md: super_written gets error=-5
[653184.857036] md/raid1:md0: Disk failure on sdb, disabling device.
                md/raid1:md0: Operation continuing on 1 devices.
[653185.608789] ata2: found unknown device (class 0)
[653194.896744] ata2: softreset failed (1st FIS failed)
[653200.648480] ata2: found unknown device (class 0)
[653204.896652] ata2: softreset failed (1st FIS failed)
[653210.648810] ata2: found unknown device (class 0)
[653239.896810] ata2: softreset failed (1st FIS failed)
[653239.896826] ata2: limiting SATA link speed to 3.0 Gbps
[653244.928818] ata2: found unknown device (class 0)
[653245.088464] ata2: found unknown device (class 0)
[653245.088484] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[653245.088491] ata2: link online but 1 devices misclassified, device detection might fail
[653245.088528] ata2.00: detaching (SCSI 1:0:0:0)
[653245.152505] sd 1:0:0:0: [sdb] Synchronizing SCSI cache
[653245.832890] ata2: found unknown device (class 0)
[653255.116619] ata2: softreset failed (1st FIS failed)
[653260.868780] ata2: found unknown device (class 0)
[653265.116809] ata2: softreset failed (1st FIS failed)
[653270.868804] ata2: found unknown device (class 0)
[653300.116654] ata2: softreset failed (1st FIS failed)
[653300.116670] ata2: limiting SATA link speed to 3.0 Gbps
[653305.148810] ata2: found unknown device (class 0)
[653305.308456] ata2: found unknown device (class 0)
[653305.308473] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[653305.308480] ata2: link online but 1 devices misclassified, device detection might fail
[653305.312951] sd 1:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[653305.312958] sd 1:0:0:0: [sdb] Stopping disk
[653305.312979] sd 1:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[867105.099901] md: data-check of RAID array md0
[867105.101757] md: md0: data-check done.

I found that the drive had simply disappeared, i.e. neither lsblk nor blkid listed it any more, as if it were not physically installed.

I made a backup, and after a reboot the drive was listed again and could be added back with sudo mdadm --re-add /dev/md0 /dev/sdd. After that everything looked normal again. For example, dmesg shows:

[ 3519.982027] md: recovery of RAID array md0
[ 3575.888503] md: md0: recovery done.

sudo mdadm --detail /dev/md0:

/dev/md0:
           Version : 1.2
     Creation Time : Sat Aug  6 17:58:09 2022
        Raid Level : raid1
        Array Size : 976630464 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Oct  1 12:02:14 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : 6d6bb2a5:d42475de:ce618a52:28bd98bb
            Events : 3719

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       48        1      active sync   /dev/sdd

sudo smartctl -a /dev/sdd:

smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-12-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Skyhawk
Device Model:     ST1000VX005-2EZ102
Serial Number:    Z9CB4EPK
LU WWN Device Id: 5 000c50 0c45a18ab
Firmware Version: CV11
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5900 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5319
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Oct  1 11:54:51 2023 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 130) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x10bb) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   081   063   006    Pre-fail  Always       -       153886026
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       494
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   069   060   045    Pre-fail  Always       -       9994348
  9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       9604h+00m+00.000s
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       37
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   065   062   040    Old_age   Always       -       35 (Min/Max 28/35)
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       880
194 Temperature_Celsius     0x0022   035   020   000    Old_age   Always       -       35 (0 20 0 0 0)
195 Hardware_ECC_Recovered  0x001a   003   001   000    Old_age   Always       -       153886026
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

I find this behavior odd. My question now is: should I replace the drive, or wait until SMART actually reports an error?
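
Worth noting (my suggestion, not part of the original post): the SMART output above shows that no self-test has ever been logged, so running one is a cheap way to gather more evidence before deciding. A sketch:

# Long surface scan; the drive stays usable, and the 'Extended self-test'
# estimate above suggests about 130 minutes
sudo smartctl -t long /dev/sdd

# Afterwards, review the self-test log
sudo smartctl -l selftest /dev/sdd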

mdadm
  • 1 Answer
  • 43 Views
James
Asked: 2023-10-01 13:44:28 +0800 CST

Very slow FIO random 128 MB read numbers on RAID 10 XFS

  • 6

I just installed two Samsung 970 EVO Plus drives and configured the largest partition on each as RAID 10 (far-2 layout). The problem is that FIO reports a read speed of only 72 MB/s. I used the following command to build the array;

mdadm --create --verbose --level=10 --metadata=1.2 --chunk=512 --raid-devices=2 --layout=f2 /dev/md/MyRAID10Array /dev/nvme0n1p3 /dev/nvme1n1p3

That should give me read speeds approaching twice 3500 MB/s. I formatted it with XFS's default options,

mkfs.xfs /dev/md127
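
One thing worth verifying (my note, not from the original post) is that mkfs.xfs picked up the md stripe geometry; it normally does so automatically, and xfs_info reports the stripe unit/width it chose:

# sunit/swidth should correspond to the 512 KiB md chunk; the mount point
# name is taken from the fio job further down
xfs_info /backup_mount_point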

The disk layout is as follows:

fdisk -l /dev/nvme0n1

Output:

Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 970 EVO Plus 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF148851-E4A9-41D2-96C6-418280164BC6

Device            Start        End    Sectors   Size Type
/dev/nvme0n1p1     2048     821247     819200   400M Linux RAID
/dev/nvme0n1p2   821248   67930111   67108864    32G Linux swap
/dev/nvme0n1p3 67930112 1953523711 1885593600 899.1G Linux RAID

mdstat reports:

md127 : active raid10 nvme1n1p3[0] nvme0n1p3[1]
      942664704 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

But when I run this through FIO,

; Random read of 128 MB of data

[random-read]
rw=randread
size=128m
directory=/backup_mount_point

I only get really bad numbers;

random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.35
Starting 1 process
Jobs: 1 (f=1)
random-read: (groupid=0, jobs=1): err= 0: pid=214183: Sun Oct  1 06:28:06 2023
  read: IOPS=18.2k, BW=71.3MiB/s (74.7MB/s)(128MiB/1796msec)
    clat (usec): min=14, max=6880, avg=54.09, stdev=53.44
     lat (usec): min=14, max=6880, avg=54.12, stdev=53.45
    clat percentiles (usec):
     |  1.00th=[   51],  5.00th=[   51], 10.00th=[   51], 20.00th=[   52],
     | 30.00th=[   52], 40.00th=[   53], 50.00th=[   53], 60.00th=[   53],
     | 70.00th=[   54], 80.00th=[   55], 90.00th=[   57], 95.00th=[   58],
     | 99.00th=[   75], 99.50th=[   76], 99.90th=[   78], 99.95th=[   80],
     | 99.99th=[  123]
   bw (  KiB/s): min=70720, max=74920, per=100.00%, avg=73509.33, stdev=2415.69, samples=3
   iops        : min=17680, max=18730, avg=18377.33, stdev=603.92, samples=3
  lat (usec)   : 20=0.03%, 50=0.02%, 100=99.93%, 250=0.02%
  lat (msec)   : 10=0.01%
  cpu          : usr=0.39%, sys=9.25%, ctx=32778, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=32768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=71.3MiB/s (74.7MB/s), 71.3MiB/s-71.3MiB/s (74.7MB/s-74.7MB/s), io=128MiB (134MB), run=1796-1796msec

Disk stats (read/write):
    md127: ios=30225/0, merge=0/0, ticks=1407/0, in_queue=1407, util=93.46%, aggrios=16384/0, aggrmerge=0/0, aggrticks=784/0, aggrin_queue=784, aggrutil=93.19%
  nvme0n1: ios=16384/0, merge=0/0, ticks=768/0, in_queue=768, util=93.04%
  nvme1n1: ios=16384/0, merge=0/0, ticks=800/0, in_queue=800, util=93.19%

If I switch to sequential reads (rw=read) it improves to 1803 MB/s, but that is still less than a third of what I expected. I'm running Arch Linux on an AMD Ryzen system with 64 GB of RAM. The motherboard is an MSI X570S Edge Wifi Max.
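
A note on why the original numbers are so low (my reading, not taken from the referenced answer): the job file above leaves fio at its defaults of bs=4k, ioengine=psync, iodepth=1, which is exactly what the output reports, so the run measures single-threaded 4 KiB latency rather than bandwidth: about 18.2k IOPS x 4 KiB ≈ 72 MB/s. A job closer to what produced the later results might look like this (parameter values are illustrative; the later output implies iodepth=64 and 8 jobs):

; hypothetical parallel random-read job
[random-read]
rw=randread
bs=4k
ioengine=libaio
iodepth=64
numjobs=8
direct=1
size=20g
directory=/backup_mount_point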

Edit: after Batistuta9's answer I changed my test and got significantly faster results. Here's the FIO summary in case you're interested;

random-read: (groupid=0, jobs=1): err= 0: pid=143332: Thu Oct  5 23:23:49 2023
  read: IOPS=15.0k, BW=58.7MiB/s (61.5MB/s)(20.0GiB/348908msec)
    clat (usec): min=48, max=2746, avg=66.21, stdev=16.90
     lat (usec): min=48, max=2746, avg=66.24, stdev=16.90
    clat percentiles (usec):
     |  1.00th=[   49],  5.00th=[   50], 10.00th=[   51], 20.00th=[   51],
     | 30.00th=[   52], 40.00th=[   54], 50.00th=[   69], 60.00th=[   72],
     | 70.00th=[   74], 80.00th=[   81], 90.00th=[   88], 95.00th=[   91],
     | 99.00th=[  120], 99.50th=[  130], 99.90th=[  149], 99.95th=[  157],
     | 99.99th=[  184]
   bw (  KiB/s): min=57064, max=61112, per=14.93%, avg=60131.86, stdev=374.32, samples=697
   iops        : min=14266, max=15278, avg=15032.97, stdev=93.58, samples=697
  lat (usec)   : 50=6.97%, 100=90.22%, 250=2.81%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.74%, sys=5.61%, ctx=5246560, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143333: Thu Oct  5 23:23:49 2023
  read: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(20.0GiB/345469msec)
    clat (usec): min=48, max=2828, avg=65.55, stdev=16.90
     lat (usec): min=48, max=2828, avg=65.58, stdev=16.90
    clat percentiles (usec):
     |  1.00th=[   49],  5.00th=[   50], 10.00th=[   51], 20.00th=[   51],
     | 30.00th=[   52], 40.00th=[   53], 50.00th=[   60], 60.00th=[   71],
     | 70.00th=[   73], 80.00th=[   79], 90.00th=[   88], 95.00th=[   90],
     | 99.00th=[  120], 99.50th=[  130], 99.90th=[  149], 99.95th=[  157],
     | 99.99th=[  184]
   bw (  KiB/s): min=59472, max=61528, per=15.08%, avg=60729.74, stdev=344.11, samples=690
   iops        : min=14868, max=15382, avg=15182.43, stdev=86.02, samples=690
  lat (usec)   : 50=6.62%, 100=90.63%, 250=2.75%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.76%, sys=5.63%, ctx=5246568, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143334: Thu Oct  5 23:23:49 2023
  read: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(20.0GiB/366867msec)
    clat (usec): min=48, max=2742, avg=69.63, stdev=16.68
     lat (usec): min=48, max=2742, avg=69.66, stdev=16.69
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[   50], 10.00th=[   51], 20.00th=[   52],
     | 30.00th=[   54], 40.00th=[   70], 50.00th=[   72], 60.00th=[   73],
     | 70.00th=[   75], 80.00th=[   86], 90.00th=[   88], 95.00th=[   92],
     | 99.00th=[  123], 99.50th=[  133], 99.90th=[  151], 99.95th=[  159],
     | 99.99th=[  184]
   bw (  KiB/s): min=55232, max=58272, per=14.20%, avg=57191.21, stdev=322.36, samples=733
   iops        : min=13808, max=14568, avg=14297.80, stdev=80.58, samples=733
  lat (usec)   : 50=3.98%, 100=92.88%, 250=3.14%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.74%, sys=5.33%, ctx=5246807, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143335: Thu Oct  5 23:23:49 2023
  read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(20.0GiB/416494msec)
    clat (usec): min=56, max=2742, avg=79.09, stdev=12.05
     lat (usec): min=56, max=2742, avg=79.12, stdev=12.05
    clat percentiles (usec):
     |  1.00th=[   64],  5.00th=[   70], 10.00th=[   71], 20.00th=[   72],
     | 30.00th=[   73], 40.00th=[   73], 50.00th=[   74], 60.00th=[   78],
     | 70.00th=[   86], 80.00th=[   88], 90.00th=[   90], 95.00th=[   95],
     | 99.00th=[  128], 99.50th=[  137], 99.90th=[  155], 99.95th=[  163],
     | 99.99th=[  190]
   bw (  KiB/s): min=48912, max=51584, per=12.51%, avg=50377.18, stdev=326.60, samples=832
   iops        : min=12228, max=12896, avg=12594.28, stdev=81.63, samples=832
  lat (usec)   : 100=96.05%, 250=3.95%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.64%, sys=4.86%, ctx=5247122, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143336: Thu Oct  5 23:23:49 2023
  read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(20.0GiB/416501msec)
    clat (usec): min=57, max=2889, avg=79.08, stdev=12.27
     lat (usec): min=57, max=2889, avg=79.12, stdev=12.27
    clat percentiles (usec):
     |  1.00th=[   64],  5.00th=[   70], 10.00th=[   71], 20.00th=[   72],
     | 30.00th=[   73], 40.00th=[   73], 50.00th=[   74], 60.00th=[   78],
     | 70.00th=[   86], 80.00th=[   88], 90.00th=[   90], 95.00th=[   95],
     | 99.00th=[  128], 99.50th=[  137], 99.90th=[  155], 99.95th=[  163],
     | 99.99th=[  188]
   bw (  KiB/s): min=48640, max=51608, per=12.51%, avg=50376.18, stdev=314.71, samples=832
   iops        : min=12160, max=12902, avg=12594.03, stdev=78.66, samples=832
  lat (usec)   : 100=96.08%, 250=3.92%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.68%, sys=4.83%, ctx=5247298, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143337: Thu Oct  5 23:23:49 2023
  read: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(20.0GiB/415249msec)
    clat (usec): min=14, max=3163, avg=78.85, stdev=12.35
     lat (usec): min=14, max=3163, avg=78.88, stdev=12.35
    clat percentiles (usec):
     |  1.00th=[   60],  5.00th=[   70], 10.00th=[   71], 20.00th=[   72],
     | 30.00th=[   73], 40.00th=[   73], 50.00th=[   74], 60.00th=[   77],
     | 70.00th=[   86], 80.00th=[   88], 90.00th=[   90], 95.00th=[   95],
     | 99.00th=[  128], 99.50th=[  137], 99.90th=[  153], 99.95th=[  161],
     | 99.99th=[  190]
   bw (  KiB/s): min=49104, max=51799, per=12.54%, avg=50526.68, stdev=317.81, samples=830
   iops        : min=12276, max=12949, avg=12631.66, stdev=79.43, samples=830
  lat (usec)   : 20=0.01%, 50=0.04%, 100=96.03%, 250=3.93%, 500=0.01%
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.61%, sys=4.90%, ctx=5246732, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143338: Thu Oct  5 23:23:49 2023
  read: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(20.0GiB/413718msec)
    clat (usec): min=48, max=2834, avg=78.55, stdev=12.65
     lat (usec): min=48, max=2834, avg=78.59, stdev=12.65
    clat percentiles (usec):
     |  1.00th=[   52],  5.00th=[   70], 10.00th=[   71], 20.00th=[   72],
     | 30.00th=[   72], 40.00th=[   73], 50.00th=[   74], 60.00th=[   77],
     | 70.00th=[   86], 80.00th=[   88], 90.00th=[   89], 95.00th=[   95],
     | 99.00th=[  127], 99.50th=[  137], 99.90th=[  153], 99.95th=[  161],
     | 99.99th=[  190]
   bw (  KiB/s): min=49776, max=51912, per=12.59%, avg=50713.30, stdev=309.76, samples=827
   iops        : min=12444, max=12978, avg=12678.31, stdev=77.41, samples=827
  lat (usec)   : 50=0.20%, 100=95.92%, 250=3.87%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.65%, sys=4.89%, ctx=5246845, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random-read: (groupid=0, jobs=1): err= 0: pid=143339: Thu Oct  5 23:23:49 2023
  read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(20.0GiB/394165msec)
    clat (usec): min=47, max=2904, avg=74.83, stdev=15.26
     lat (usec): min=47, max=2904, avg=74.86, stdev=15.26
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[   51], 10.00th=[   53], 20.00th=[   70],
     | 30.00th=[   71], 40.00th=[   72], 50.00th=[   73], 60.00th=[   75],
     | 70.00th=[   85], 80.00th=[   87], 90.00th=[   89], 95.00th=[   94],
     | 99.00th=[  126], 99.50th=[  135], 99.90th=[  153], 99.95th=[  161],
     | 99.99th=[  188]
   bw (  KiB/s): min=52096, max=54312, per=13.21%, avg=53229.10, stdev=312.71, samples=788
   iops        : min=13024, max=13578, avg=13307.26, stdev=78.16, samples=788
  lat (usec)   : 50=1.65%, 100=94.73%, 250=3.62%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.67%, sys=5.07%, ctx=5246945, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5242880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=393MiB/s (412MB/s), 49.2MiB/s-59.3MiB/s (51.6MB/s-62.2MB/s), io=160GiB (172GB), run=345469-416501msec

Disk stats (read/write):
    md127: ios=41940854/14, merge=0/0, ticks=2850618/100, in_queue=2850718, util=100.00%, aggrios=20971520/47, aggrmerge=0/2, aggrticks=1428779/139, aggrin_queue=1428932, aggrutil=100.00%
  nvme0n1: ios=21179809/47, merge=0/2, ticks=1427626/139, in_queue=1427779, util=100.00%
  nvme1n1: ios=20763231/47, merge=0/2, ticks=1429932/139, in_queue=1430086, util=100.00%
mdadm
  • 1 Answer
  • 291 Views
Codemonkey
Asked: 2022-03-04 06:24:00 +0800 CST

What does `mdadm --manage /dev/md1` do? Do I need to "undo" it?

  • 0

I ran this hoping to learn something about my array (I now know I should have used --detail). But I'm not clear on what the --manage option does. It produced no output and just dropped me back at the bash prompt.

Do I need to run some command to undo whatever it did?
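
For what it's worth (my note, not an authoritative statement of mdadm internals): --manage only selects a mode; with no member device and no directive such as --add, --fail, or --remove following it, there is nothing for mdadm to do, which matches the lack of output. The read-only inspection commands are:

mdadm --detail /dev/md1   # full array status (what --detail is for)
cat /proc/mdstat          # quick kernel-side summary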

mdadm
  • 1 Answer
  • 37 Views
James
Asked: 2021-10-29 08:16:23 +0800 CST

How do I resize the filesystem on a RAID array?

  • 0

I recently added a fifth drive to my software RAID array, and mdadm accepted it:

$ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1        259:0    0 894.3G  0 disk
├─nvme0n1p1    259:4    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme0n1p2    259:5    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme3n1        259:1    0 894.3G  0 disk
├─nvme3n1p1    259:6    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme3n1p2    259:7    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme2n1        259:2    0 894.3G  0 disk
├─nvme2n1p1    259:8    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme2n1p2    259:9    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme1n1        259:3    0 894.3G  0 disk
├─nvme1n1p1    259:10   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme1n1p2    259:11   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme4n1        259:12   0 894.3G  0 disk
├─nvme4n1p1    259:15   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme4n1p2    259:16   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md0 : active raid1 nvme4n1p1[4] nvme1n1p1[2] nvme3n1p1[0] nvme0n1p1[3] nvme2n1p1[1]
      523264 blocks super 1.2 [5/5] [UUUUU]

md1 : active raid5 nvme4n1p2[5] nvme2n1p2[1] nvme1n1p2[2] nvme3n1p2[0] nvme0n1p2[4]
      3748134912 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 3/7 pages [12KB], 65536KB chunk

unused devices: <none>

The problem is that my filesystem still thinks only 4 drives are attached, and it hasn't grown to take advantage of the extra one.

I tried

$ sudo e2fsck -fn /dev/md1
e2fsck 1.45.5 (07-Jan-2020)
Warning!  /dev/md1 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/md1 contains a LVM2_member file system

and

$ sudo resize2fs /dev/md1
resize2fs 1.45.5 (07-Jan-2020)
resize2fs: Device or resource busy while trying to open /dev/md1
Couldn't find valid filesystem superblock.

but no luck so far:

$ df
Filesystem            1K-blocks       Used Available Use% Mounted on
udev                  131841212          0 131841212   0% /dev
tmpfs                  26374512       2328  26372184   1% /run
/dev/mapper/vg0-root 2681290296 2329377184 215641036  92% /
tmpfs                 131872540          0 131872540   0% /dev/shm
tmpfs                      5120          0      5120   0% /run/lock
tmpfs                 131872540          0 131872540   0% /sys/fs/cgroup
/dev/md0                 498532      86231    386138  19% /boot
/dev/mapper/vg0-tmp    52427196     713248  51713948   2% /tmp
tmpfs                  26374508          0  26374508   0% /run/user/1001
tmpfs                  26374508          0  26374508   0% /run/user/1002

I hope this is enough information - happy to provide more if it would help.
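
A note on why e2fsck and resize2fs fail here (my reading of the output above): /dev/md1 is an LVM physical volume, not an ext filesystem, so the PV and LV have to be grown first and the filesystem tools pointed at the logical volume. A sketch of the usual sequence, assuming the md array itself already expanded (lsblk shows md1 at 3.5T) and that the root filesystem is ext4 (an assumption):

sudo pvresize /dev/md1                      # let LVM see the larger array
sudo lvextend -l +100%FREE /dev/vg0/root    # grow the root LV into the new space
sudo resize2fs /dev/mapper/vg0-root         # grow the filesystem (ext4 assumed)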

raid mdadm
  • 1 Answer
  • 269 Views
Jayson Reis
Asked: 2021-10-27 08:35:06 +0800 CST

How to improve RAID 5 speed with mdadm + LUKS + LVM

  • 2

I think I'm a bit lost with my current server setup. It's an HP ProLiant DL160 Gen6 into which I put 4 spinning disks, set up with mdadm + LUKS + LVM and btrfs on top (maybe I went too far?), and I/O really suffers: reads of around 50 MB/s and writes of around 2 MB/s. I feel like I messed something up.

One thing I noticed is that I set up mdadm on the whole block devices (sdb) rather than on partitions (sdb1); could that affect anything?

Here you can see the output of fio --name=randwrite --rw=randwrite --direct=1 --bs=16k --numjobs=128 --size=200M --runtime=60 --group_reporting while the machine was nearly idle.

randwrite: (groupid=0, jobs=128): err= 0: pid=54290: Tue Oct 26 16:21:50 2021
  write: IOPS=137, BW=2193KiB/s (2246kB/s)(131MiB/61080msec); 0 zone resets
    clat (msec): min=180, max=2784, avg=924.48, stdev=318.02
     lat (msec): min=180, max=2784, avg=924.48, stdev=318.02
    clat percentiles (msec):
     |  1.00th=[  405],  5.00th=[  542], 10.00th=[  600], 20.00th=[  693],
     | 30.00th=[  760], 40.00th=[  818], 50.00th=[  860], 60.00th=[  927],
     | 70.00th=[ 1011], 80.00th=[ 1133], 90.00th=[ 1267], 95.00th=[ 1452],
     | 99.00th=[ 2165], 99.50th=[ 2232], 99.90th=[ 2635], 99.95th=[ 2769],
     | 99.99th=[ 2769]
   bw (  KiB/s): min= 3972, max= 4735, per=100.00%, avg=4097.79, stdev= 1.58, samples=8224
   iops        : min=  132, max=  295, avg=248.40, stdev= 0.26, samples=8224
  lat (msec)   : 250=0.04%, 500=2.82%, 750=25.96%, 1000=40.58%, 2000=28.67%
  lat (msec)   : >=2000=1.95%
  cpu          : usr=0.00%, sys=0.01%, ctx=18166, majf=0, minf=1412
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8372,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=2193KiB/s (2246kB/s), 2193KiB/s-2193KiB/s (2246kB/s-2246kB/s), io=131MiB (137MB), run=61080-61080msec

Update 1: sequential write with dd

root@hp-proliant-dl160-g6-1:~# dd if=/dev/zero of=disk-test oflag=direct bs=512k count=100
100+0 records in
100+0 records out
52428800 bytes (52 MB, 50 MiB) copied, 5.81511 s, 9.0 MB/s

Kernel: 5.4.0-89-generic

OS: Ubuntu 20.04.3

mdadm: 4.1-5ubuntu1.2

lvm2: 2.03.07-1ubuntu1

blkid output

/dev/mapper/dm_crypt-0: UUID="r7TBdk-1GZ4-zbUh-007u-BfuP-dtis-bTllYi" TYPE="LVM2_member"
/dev/sda2: UUID="64528d97-f05c-4f34-a238-f7b844b3bb58" UUID_SUB="263ae70e-d2b8-4dfe-bc6b-bbc2251a9f32" TYPE="btrfs" PARTUUID="494be592-3dad-4600-b954-e2912e410b8b"
/dev/sdb: UUID="478e8132-7783-1fb1-936a-358d06dbd871" UUID_SUB="4aeb4804-6380-5421-6aea-d090e6aea8a0" LABEL="ubuntu-server:0" TYPE="linux_raid_member"
/dev/sdc: UUID="478e8132-7783-1fb1-936a-358d06dbd871" UUID_SUB="9d5a4ddd-bb9e-bb40-9b21-90f4151a5875" LABEL="ubuntu-server:0" TYPE="linux_raid_member"
/dev/sdd: UUID="478e8132-7783-1fb1-936a-358d06dbd871" UUID_SUB="f08b5e6d-f971-c622-cd37-50af8ff4b308" LABEL="ubuntu-server:0" TYPE="linux_raid_member"
/dev/sde: UUID="478e8132-7783-1fb1-936a-358d06dbd871" UUID_SUB="362025d4-a4d2-8727-6853-e503c540c4f7" LABEL="ubuntu-server:0" TYPE="linux_raid_member"
/dev/md0: UUID="a5b5bf95-1ff1-47f9-b3f6-059356e3af41" TYPE="crypto_LUKS"
/dev/mapper/vg0-lv--0: UUID="6db4e233-5d97-46d2-ac11-1ce6c72f5352" TYPE="swap"
/dev/mapper/vg0-lv--1: UUID="4e1a5131-cb91-48c4-8266-5b165d9f5071" UUID_SUB="e5fc407e-57c2-43eb-9b66-b00207ea6d91" TYPE="btrfs"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/loop8: TYPE="squashfs"
/dev/loop9: TYPE="squashfs"
/dev/loop10: TYPE="squashfs"
/dev/sda1: PARTUUID="fa30c3f5-6952-45f0-b844-9bfb46fa0224"

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb[0] sdc[1] sdd[2] sde[4]
      5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/15 pages [8KB], 65536KB chunk

unused devices: <none>

lshw -c disk

  *-disk
       description: SCSI Disk
       product: DT 101 G2
       vendor: Kingston
       physical id: 0.0.0
       bus info: scsi@0:0.0.0
       logical name: /dev/sda
       version: 1.00
       serial: xxxxxxxxxxxxxxxxxxxx
       size: 7643MiB (8015MB)
       capabilities: removable
       configuration: ansiversion=4 logicalsectorsize=512 sectorsize=512
     *-medium
          physical id: 0
          logical name: /dev/sda
          size: 7643MiB (8015MB)
          capabilities: gpt-1.00 partitioned partitioned:gpt
          configuration: guid=6c166e3e-27c9-4edf-9b0d-e21892cbce41
  *-disk
       description: ATA Disk
       product: ST2000DM008-2FR1
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/sdb
       version: 0001
       serial: xxxxxxxxxxxxxxxxxxxx
       size: 1863GiB (2TB)
       capabilities: removable
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
     *-medium
          physical id: 0
          logical name: /dev/sdb
          size: 1863GiB (2TB)
  *-disk
       description: ATA Disk
       product: ST2000DM008-2FR1
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sdc
       version: 0001
       serial: xxxxxxxxxxxxxxxxxxxx
       size: 1863GiB (2TB)
       capabilities: removable
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
     *-medium
          physical id: 0
          logical name: /dev/sdc
          size: 1863GiB (2TB)
  *-disk
       description: ATA Disk
       product: WDC WD20EZBX-00A
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdd
       version: 1A01
       serial: xxxxxxxxxxxxxxxxxxxx
       size: 1863GiB (2TB)
       capabilities: removable
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
     *-medium
          physical id: 0
          logical name: /dev/sdd
          size: 1863GiB (2TB)
  *-disk
       description: ATA Disk
       product: WDC WD20EZBX-00A
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@4:0.0.0
       logical name: /dev/sde
       version: 1A01
       serial: xxxxxxxxxxxxxxxxxxxx
       size: 1863GiB (2TB)
       capabilities: removable
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
     *-medium
          physical id: 0
          logical name: /dev/sde
          size: 1863GiB (2TB)

Do you see anything in this setup that could be causing problems? And do you think adding an NVMe drive on a PCIe card and using it for caching would help?
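
For reference, the caching I have in mind is lvmcache on top of the existing VG, roughly like this (a sketch, untested; it assumes the NVMe shows up as /dev/nvme0n1, that the data LV is vg0/lv-1, and the 200G size is just an example):

sudo pvcreate /dev/nvme0n1                                   # initialize the NVMe as a PV
sudo vgextend vg0 /dev/nvme0n1                               # add it to the existing volume group
sudo lvcreate -L 200G -n nvmecache vg0 /dev/nvme0n1          # carve a cache volume out of the NVMe
sudo lvconvert --type cache --cachevol nvmecache vg0/lv-1    # attach it as a dm-cache to the btrfs LV

One caveat: unlike the md0-backed PV, this NVMe PV would sit outside the LUKS layer, so cached data would land on it unencrypted.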

linux lvm mdadm btrfs
  • 2 Answers
  • 1051 Views
Will Roberts
Asked: 2021-10-26 09:54:22 +0800 CST

Properly booting a software-based RAID1 with a missing or failed drive

  • 3

tl;dr: Is there a way to properly boot a software-based RAID1 when a drive is missing or has failed (that was not failed by the user first)?

To be clear, a software-based RAID1 can boot without a hard drive if the drive was properly failed before the reboot. I know this is subjective, but that does not seem like a reasonable solution or an acceptable answer. For example: the facility takes a power surge, and a hard drive dies at the same moment the power drops. Trying to boot with a degraded drive that was not failed "properly" drops the system into emergency mode.

I have read many posts here and on other forums suggesting that you install grub on all partitions, or manually rebuild grub, or add nofail to the options in /etc/fstab, or other seemingly simple solutions; the reality is that none of these suggestions has worked.

Although I have more or less accepted that this is not possible, something about that does not sit right. So I am checking whether anyone else has hit this problem or has a solution for it.

My environment:

I have an older motherboard that does not support UEFI, so I boot in legacy/MBR mode.
OS:

cat /etc/redhat-release
Red Hat Enterprise Linux Workstation release 7.6 (Maipo)

Kernel:

uname -r
3.10.0-957.el7.x86_64

mdadm:

mdadm --version
mdadm - v4.1-rc1 2018-03-22

My RAID is a RAID1 across three drives (sda, sdb, sdc) with 4 partitions:

md1 - /boot
md2 - /home
md3 - /
md4 - swap

I have installed grub on all the drives and made sure the boot partitions have the boot flag. fdisk /dev/sd[a,b,c] all show a * in the boot field next to the appropriate partition
- and -
grub2-install /dev/sd[a,b,c] (as separate commands, each reporting a "successfully installed" result).

To reproduce the problem:

  1. Shut down the system with all drives assigned to the RAID present and the RAID fully operational.
  2. Remove a hard drive.
  3. Power the system on.

Result: the system boots through grub. Gdm tries to display the login screen, but after roughly 20 seconds it fails and drops to the emergency console. Many pieces of a "normal" system are missing; for example, /boot and /etc do not exist. There do not appear to be any kernel panic messages or problems shown in dmesg.

Again, the key point here is that the RAID was fully assembled, then the machine was powered off and the drive removed. If you properly fail the drive and remove it from the RAID first, you can boot without the drive.

Example:
mdadm --manage /dev/md[1,2,3,4] --fail /dev/sda[1,2,3,4] (as separate commands)
mdadm --manage /dev/md[1,2,3,4] --remove /dev/sda[1,2,3,4] (as separate commands)

I know this seems trivial, but I have yet to find a working solution for booting a system with a degraded RAID1. You would think this is a simple problem with a simple solution, but it is not.
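
For what it's worth, once the system has dropped to the emergency console, the degraded arrays can usually be forced up by hand with something like this (a sketch; --run is what lets an array start with a member missing):

mdadm --assemble --scan --run    # assemble all arrays, starting them even while degraded
exit                             # leave the emergency shell and let the boot continue

But doing this by hand after every bad shutdown is exactly what I am trying to avoid.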

Any help, input, or suggestions would be greatly appreciated.

linux redhat boot mdadm software-raid
  • 1 Answer
  • 351 Views
newbie
Asked: 2021-10-22 01:16:22 +0800 CST

RAID1 recovery after degradation

  • 0

Below is the output of lsblk, mdadm, and /proc/mdstat for my 2-disk RAID1 array:

anand@ironman:~$ lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0 465.8G  0 disk  
|-sda1                      8:1    0   976M  0 part  
| `-md0                     9:0    0 975.4M  0 raid1 
|   `-vg_boot-boot (dm-6) 253:6    0   972M  0 lvm   /boot
`-sda2                      8:2    0 464.8G  0 part  
sdb                         8:16   0 465.8G  0 disk  
|-sdb1                      8:17   0   976M  0 part  
`-sdb2                      8:18   0 464.8G  0 part  
  `-md1                     9:1    0 464.7G  0 raid1 
    |-vg00-root (dm-0)    253:0    0  93.1G  0 lvm   /
    |-vg00-home (dm-1)    253:1    0  96.6G  0 lvm   /home
    |-vg00-var (dm-2)     253:2    0  46.6G  0 lvm   /var
    |-vg00-usr (dm-3)     253:3    0  46.6G  0 lvm   /usr
    |-vg00-swap1 (dm-4)   253:4    0   7.5G  0 lvm   [SWAP]
    `-vg00-tmp (dm-5)     253:5    0   952M  0 lvm   /tmp

anand@ironman:~$ cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdb2[1]
      487253824 blocks super 1.2 [2/1] [_U]
      
md0 : active raid1 sda1[0]
      998848 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

anand@ironman:~$ sudo mdadm -D /dev/md0 /dev/md1
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 22 21:00:35 2013
     Raid Level : raid1
     Array Size : 998848 (975.60 MiB 1022.82 MB)
  Used Dev Size : 998848 (975.60 MiB 1022.82 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 21 14:35:36 2021
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:0  (local to host ironman)
           UUID : cbcb9fb6:f7727516:9328d30a:0a970c9b
         Events : 4415

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       0        0        1      removed
/dev/md1:
        Version : 1.2
  Creation Time : Wed May 22 21:00:47 2013
     Raid Level : raid1
     Array Size : 487253824 (464.68 GiB 498.95 GB)
  Used Dev Size : 487253824 (464.68 GiB 498.95 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 21 14:35:45 2021
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:1  (local to host ironman)
           UUID : 3f64c0ce:fcb9ff92:d5fd68d7:844b7e12
         Events : 63025777

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2

What are the commands to recover from this RAID1 failure?

Do I need to get a new hard drive before I can safely reassemble the RAID1 setup?
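
My guess, assuming SMART says both disks are still healthy, is that recovery is just re-adding the missing half of each mirror, something like this (a sketch, untested):

sudo mdadm /dev/md0 --add /dev/sdb1   # re-add the member missing from md0
sudo mdadm /dev/md1 --add /dev/sda2   # re-add the member missing from md1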

Update 1:

    anand@ironman:~$ sudo smartctl -H /dev/sda 
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Please note the following marginal Attributes:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
190 Airflow_Temperature_Cel 0x0022   054   040   045    Old_age   Always   In_the_past 46 (0 174 46 28)

anand@ironman:~$ sudo smartctl -H /dev/sdb
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

anand@ironman:~$ 

SMART info:

Output of smartctl -a -data /dev/sda
Output of smartctl -a -data /dev/sdb

Update 2:

anand@ironman:~$ sudo blkid -o list
device                                  fs_type        label           mount point                                 UUID
-------------------------------------------------------------------------------------------------------------------------------------------------------
/dev/sda1                               linux_raid_member ironman:0    (in use)                                    cbcb9fb6-f772-7516-9328-d30a0a970c9b
/dev/sda2                               linux_raid_member ironman:1    (not mounted)                               3f64c0ce-fcb9-ff92-d5fd-68d7844b7e12
/dev/sdb1                               linux_raid_member ironman:0    (not mounted)                               cbcb9fb6-f772-7516-9328-d30a0a970c9b
/dev/sdb2                               linux_raid_member ironman:1    (in use)                                    3f64c0ce-fcb9-ff92-d5fd-68d7844b7e12
/dev/md0                                LVM2_member                    (in use)                                    JKI3Lr-VdDK-Ogsk-KOQk-jSKJ-udAV-Vt4ckP
/dev/md1                                LVM2_member                    (in use)                                    CAqW3D-WJ7g-2lbw-G3cn-nidp-2jdQ-evFe7r
/dev/mapper/vg00-root                   ext4           root            /                                           82334ff8-3eff-4fc7-9b86-b11eeda314ae
/dev/mapper/vg00-home                   ext4           home            /home                                       8e9f74dd-08e4-45a3-a492-d4eaf22a1d68
/dev/mapper/vg00-var                    ext4           var             /var                                        0e798199-3219-458d-81b8-b94a5736f1be
/dev/mapper/vg00-usr                    ext4           usr             /usr                                        d8a335fc-72e6-4b98-985e-65cff08c4e22
/dev/mapper/vg00-swap1                  swap                           <swap>                                      b95ee4ca-fcca-487f-b6ff-d6c0d49426d8
/dev/mapper/vg00-tmp                    ext4           tmp             /tmp                                        c879fae8-bd25-431d-be3e-6120d0381cb8
/dev/mapper/vg_boot-boot                ext4           boot            /boot                                       12684df6-6c4a-450f-8ed1-d3149609a149

-- end of Update 2

Update 3 - after following Nikita's suggestion:

/dev/md0:
        Version : 1.2
  Creation Time : Wed May 22 21:00:35 2013
     Raid Level : raid1
     Array Size : 998848 (975.60 MiB 1022.82 MB)
  Used Dev Size : 998848 (975.60 MiB 1022.82 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct 22 21:20:09 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:0  (local to host ironman)
           UUID : cbcb9fb6:f7727516:9328d30a:0a970c9b
         Events : 4478

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1

anand@ironman:~/.scripts/automatem/bkp$ sudo mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Wed May 22 21:00:47 2013
     Raid Level : raid1
     Array Size : 487253824 (464.68 GiB 498.95 GB)
  Used Dev Size : 487253824 (464.68 GiB 498.95 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct 22 21:21:37 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ironman:1  (local to host ironman)
           UUID : 3f64c0ce:fcb9ff92:d5fd68d7:844b7e12
         Events : 63038935

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2

Thank you all!

Anand

mdadm raid1
  • 1 Answer
  • 108 Views
Gradyn Wursten
Asked: 2021-10-15 16:55:44 +0800 CST

Have to manually build the RAID array on every boot, and cannot add a third drive - MDADM

  • 0

I have a RAID1 array that, for a long time now, I have had to rebuild manually every time the system boots; I have never had time to figure out why. This is the command I use to rebuild it on each boot: sudo mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

This works fine with no data loss, and I can then manually mount /dev/md0 where it is needed (in this case /mnt/plex). However, I just installed a third hard drive in my server and want to upgrade to RAID5. I used cfdisk to create a partition on the new drive.

Then I upgraded the array to RAID5:
sudo mdadm --grow /dev/md0 -l 5

Then I added the new drive to the array: sudo mdadm /dev/md0 --add /dev/sda1

Finally, I tried to grow the array to 3 drives with sudo mdadm /dev/md0 --grow -n 3, at which point I got the following errors:

mdadm: ARRAY line /dev/md0 has no identity information.
mdadm: /dev/md0: cannot get superblock from /dev/sda1

The first error shows up a lot; it is the second one that is causing the problem. Why can't I add /dev/sda1 to the array? And while I am at it, why doesn't the array assemble automatically when the system boots?
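
If the boot-time half of this is just the array configuration never having been persisted, I would expect something like the following to fix it (a sketch, untested; Ubuntu paths assumed). I also wonder whether building the array with --build, which writes no superblock, is the real culprit behind the superblock error:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array definition
sudo update-initramfs -u                                         # rebuild the initramfs so assembly happens at boot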

In case it helps, here are my drives/partitions:

sda       8:0    0   3.7T  0 disk
+-sda1    8:1    0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdb       8:16   0   3.7T  0 disk
+-sdb1    8:17   0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdc       8:32   0 931.5G  0 disk
+-md1     9:1    0 931.4G  0 raid1 /mnt/nas
sdd       8:48   0 931.5G  0 disk
+-md1     9:1    0 931.4G  0 raid1 /mnt/nas
sde       8:64   0   3.7T  0 disk
+-sde1    8:65   0   3.7T  0 part
  +-md0   9:0    0   3.7T  0 raid5 /mnt/plex
sdf       8:80   0 149.1G  0 disk
+-sdf1    8:81   0   512M  0 part  /boot/efi
+-sdf2    8:82   0 148.6G  0 part  /

sdb and sde are the functioning RAID members. In case it helps, here are the array details from mdadm:

gradyn@hbi-server:~$ sudo mdadm --detail /dev/md0
mdadm: ARRAY line /dev/md0 has no identity information.
/dev/md0:
           Version :
     Creation Time : Thu Oct 14 22:19:50 2021
        Raid Level : raid5
        Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
     Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
      Raid Devices : 2
     Total Devices : 3

             State : clean
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

       2       8        1        -      spare   /dev/sda1

linux ubuntu raid mdadm
  • 1 Answer
  • 251 Views
