Brian Thomas's questions

Brian Thomas
Asked: 2020-01-23 15:42:12 +0800 CST

How did zfs raidz-2 recover from 3 drive failures?


I'd like to understand what happened, how ZFS recovered completely, and whether my data is in fact still intact.
When I came in last night I was dismayed, then confused.

zpool status
  pool: san
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: resilvered 392K in 0h0m with 0 errors on Tue Jan 21 16:36:41 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        san                                           DEGRADED     0     0     0
          raidz2-0                                    DEGRADED     0     0     0
            ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346  ONLINE       0     0     0
            ata-ST2000DM001-9YN164_W1E07E0G           DEGRADED     0     0    38  too many errors
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332  DEGRADED     0     0    63  too many errors
            ata-ST2000NM0011_Z1P07NVZ                 ONLINE       0     0     0
            ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ490344  ONLINE       0     0     0
            wwn-0x50014ee20949b6f9                    DEGRADED     0     0    75  too many errors

errors: No known data errors 

How can there be no data errors, and the pool as a whole not be faulted?

One drive, sdf, failed its smartctl SMART test with a read failure; the other drives have somewhat smaller problems: uncorrectable/pending sectors or UDMA CRC errors.

I tried taking each of the failing drives offline and then bringing them back online, one at a time, but that didn't help.
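For reference, the offline/online cycling was roughly this per drive (a minimal sketch; the device name is one of the by-id names from the status output above):

$ zpool offline san ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332
$ zpool online san ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332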

    $ zpool status
  pool: san
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: resilvered 392K in 0h0m with 0 errors on Tue Jan 21 16:36:41 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        san                                           DEGRADED     0     0     0
          raidz2-0                                    DEGRADED     0     0     0
            ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346  ONLINE       0     0     0
            ata-ST2000DM001-9YN164_W1E07E0G           DEGRADED     0     0    38  too many errors
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332  OFFLINE      0     0    63
            ata-ST2000NM0011_Z1P07NVZ                 ONLINE       0     0     0
            ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ490344  ONLINE       0     0     0
            wwn-0x50014ee20949b6f9                    DEGRADED     0     0    75  too many errors

So, either I'm very lucky and my data really is all still there, or I'm a bit confused. After checking which drive was worst, I replaced it with my only spare.
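The swap itself was just a zpool replace of the failing drive's by-id name with the spare's (sketch; these are the two devices that show up under replacing-1 in the output below):

$ zpool replace san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1171516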

    $ zpool status
  pool: san
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Jan 21 17:33:15 2020
        467G scanned out of 8.91T at 174M/s, 14h10m to go
        77.6G resilvered, 5.12% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        san                                               DEGRADED     0     0     0
          raidz2-0                                        DEGRADED     0     0     0
            ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346      ONLINE       0     0     0
            replacing-1                                   DEGRADED     0     0     0
              ata-ST2000DM001-9YN164_W1E07E0G             OFFLINE      0     0    38
              ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1171516  ONLINE       0     0     0  (resilvering)
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332      DEGRADED     0     0    63  too many errors
            ata-ST2000NM0011_Z1P07NVZ                     ONLINE       0     0     0
            ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ490344      ONLINE       0     0     0
            wwn-0x50014ee20949b6f9                        DEGRADED     0     0    75  too many errors

The resilver did complete successfully.

$ zpool status
  pool: san
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: resilvered 1.48T in 12h5m with 0 errors on Wed Jan 22 05:38:48 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        san                                             DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346    ONLINE       0     0     0
            ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1171516  ONLINE       0     0     0
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332    DEGRADED     0     0    63  too many errors
            ata-ST2000NM0011_Z1P07NVZ                   ONLINE       0     0     0
            ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ490344    ONLINE       0     0     0
            wwn-0x50014ee20949b6f9                      DEGRADED     0     0    75  too many errors

I'm now at a crossroads. Normally I would dd-zero the first 2 MB of a failed drive and replace it with itself, which I could do here, but if there really is data missing I may need these last two degraded drives to recover it.

I have that sdf sitting on my desk now, removed. In the worst case, I feel I could use it to help with recovery.

In the meantime, I think I'll dd-zero the first few MB of one degraded drive and replace it with itself, which I expect will sort things out, then rinse and repeat for the second failing drive until I can get some replacements on hand.
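Concretely, the zero-and-reuse step per degraded drive would look something like this (a sketch I have not run on these drives yet; triple-check the device name before the dd, and zfs may want -f on the replace if it still finds old labels at the end of the disk):

# wipe the partition table and front ZFS labels, then re-add the disk as its own replacement
$ zpool offline san ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332
$ dd if=/dev/zero of=/dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 bs=1M count=2
$ zpool replace san ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332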

Question: What happened, how was the pool able to hang on, or might I have lost some data (doubtful, considering zfs and the integrity it reports)?

Could it be down to a lucky order of failures, e.g. which drive in the stack failed first?

Question (this one is just FYI, not really on topic): What caused all 3 to fail at the same time? I think a scrub was the catalyst. I had checked the night before and all drives were online.

Note that cabling has been an issue lately, and the office gets cold at night, but those problems only showed up as drive unavailable, not checksum errors. I don't think this is cabling so much as aging drives; they're 5 years old. But 3 failures in one day? Come on, that's enough to scare a lot of us!
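What I'm leaning on to separate cabling trouble from media trouble is the raw SMART counters (sketch; attribute names as smartctl prints them on these drives, the grep just filters the table):

# CRC errors usually implicate the cable/controller path; reallocated/pending sectors point at the platters
$ smartctl -A /dev/sdf | grep -E 'UDMA_CRC_Error_Count|Reallocated_Sector_Ct|Current_Pending_Sector'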

zfs redundancy zfsonlinux raidz
  • 1 answer
  • 1258 Views
Brian Thomas
Asked: 2017-12-06 16:23:51 +0800 CST

Broken boot order after replacing a zfs disk (zfs boot drive)


My system was fine and fully configured, then one of my zfs (raidz2) drives failed. I swapped that drive out, but it wouldn't register. So when I rebooted the system, it would not boot until one of the unrecognized array drives was disconnected (I believe it was the new one).

I did get it booted (by disconnecting them, then reconnecting them early in the boot process), got zfs to replace the drive successfully, and have a working system. But now I need to fix the boot problem.

Looking at fstab, the UUIDs appear to be right, so I can't see what the hang-up is (a quick check of them is sketched after the listing).

UUID=bbc69fc6-12fa-499a-a0c6-e0f65e248ce2 /                       xfs     defaults        0 0
UUID=226e836d-7b8e-424c-b0a0-0397ee458c7c /boot                   xfs     defaults        0 0
UUID=60c94586-7d6a-4e8a-b350-04719990cb69 /home                   xfs     defaults        0 0
UUID=4d91f3bb-8c97-43c8-acea-fb1dd1fe0ed7 swap                    swap    defaults        0 0
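A quick sanity check that each of those UUIDs actually resolves to a device right now (a minimal sketch, assuming bash and util-linux blkid):

# any UUID that prints "missing" has no matching block device at the moment
$ for u in $(awk '/^UUID=/{print substr($1,6)}' /etc/fstab); do
      printf '%s -> ' "$u"; blkid -U "$u" || echo missing
  done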

Here is blkid:

/dev/sda1: LABEL="san" UUID="6838649739541725191" UUID_SUB="4029408817980194900" TYPE="zfs_member" PARTLABEL="zfs-288cf7ef18c79daa" PARTUUID="ec08031c-df8f-cd4b-9e38-010b5e967cab"
/dev/sdb1: LABEL="System Reserved" UUID="A2885ECD885EA019" TYPE="ntfs"
/dev/sdb2: UUID="5E2E62DB2E62ABA9" TYPE="ntfs"
/dev/sdb3: UUID="226e836d-7b8e-424c-b0a0-0397ee458c7c" TYPE="xfs"
/dev/sdb5: UUID="60c94586-7d6a-4e8a-b350-04719990cb69" TYPE="xfs"
/dev/sdb6: UUID="4d91f3bb-8c97-43c8-acea-fb1dd1fe0ed7" TYPE="swap"
/dev/sdb7: UUID="bbc69fc6-12fa-499a-a0c6-e0f65e248ce2" TYPE="xfs"
/dev/sdc1: LABEL="san" UUID="6838649739541725191" UUID_SUB="13087102930353693443" TYPE="zfs_member" PARTLABEL="zfs-1e90ee20c4627577" PARTUUID="00e53f8e-9545-844d-9a0e-6c8746643114"
/dev/sdd1: LABEL="san" UUID="6838649739541725191" UUID_SUB="2133500285998926230" TYPE="zfs_member" PARTLABEL="zfs-19ae99cec015d0db" PARTUUID="440f2613-f23b-3c4e-bd90-ce2ef28f3e9f"
/dev/sde1: LABEL="san" UUID="6838649739541725191" UUID_SUB="7987608574075307207" TYPE="zfs_member" PARTLABEL="zfs-8427c3bf89616cda" PARTUUID="6792f785-4803-1643-888b-a98fd6f6743e"
/dev/sdf1: LABEL="san" UUID="6838649739541725191" UUID_SUB="676738182062217510" TYPE="zfs_member" PARTLABEL="zfs-061b31fabbe106cb" PARTUUID="1f50712e-0c01-d445-9ad7-381d08307c2b"
/dev/sdg1: LABEL="san" UUID="6838649739541725191" UUID_SUB="10361692541083745258" TYPE="zfs_member" PARTLABEL="zfs-5d020760c598b14c" PARTUUID="eaae6308-64b3-004d-a7c8-be4e55c8c859"
/dev/sda9: PARTUUID="4aa5c270-b2c6-4342-aea0-5ae7f4a1eba4"
/dev/sdc9: PARTUUID="c06d2bcf-5c87-f24c-8782-aed395d053d7"
/dev/sdd9: PARTUUID="ec587856-71ad-5d42-9ad0-8251ee74f151"
/dev/sde9: PARTUUID="80203adf-4e65-5e42-8e9b-2a6ccf0eafca"
/dev/sdf9: PARTUUID="ea6c550c-f1a7-4a48-bf51-72c4ba44ab00"
/dev/sdg9: PARTUUID="b0e178b5-12ec-ac44-a5a8-1a05228e2015"

With them connected, the symptom is a failure somewhere during POST, so it never gets as far as loading the linux kernel; it hangs at POST with just a cursor on the screen. Normally I see this cursor blink, jump down a few lines, and then the linux kernel gives me the boot selection.

Now, when it gets to that jump, it just stops, heh.

Looking closer, I see that ntfs entry there (sdb1). What on earth is that, and could it be the problem? It may well be something I was using; I set all this up last year.

Where do I start debugging this?

As requested by @Michael Hampton:

The new drive is currently /dev/sdg and the boot drive is /dev/sda. Before I did the zfs replace, I remember the boot drive would sometimes randomly switch from sda to /dev/sdb but still boot; that may be part of the connection problems I've been having at boot lately.
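To see which of these disks actually carry boot code in their first sector, something like this should narrow it down (sketch; it just greps the 512-byte MBR for the GRUB signature string, run as root):

$ for d in /dev/sd[a-g]; do
      if dd if="$d" bs=512 count=1 2>/dev/null | strings | grep -q GRUB; then
          echo "$d: GRUB boot code in MBR"
      else
          echo "$d: no GRUB in MBR"
      fi
  done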

Here is my partition table

$ fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x44fdfe06

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
/dev/sda2          206848   256002047   127897600    7  HPFS/NTFS/exFAT
/dev/sda3       256002048   257026047      512000   83  Linux
/dev/sda4       257026048   976773119   359873536    5  Extended
/dev/sda5       257028096   467412991   105192448   83  Linux
/dev/sda6       467415040   479737855     6161408   82  Linux swap / Solaris
/dev/sda7       479739904   563625983    41943040   83  Linux
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 18C699EF-38E1-3B4F-8D2A-07F0101E7B11


#         Start          End    Size  Type            Name
 1         2048   3907012607    1.8T  Solaris /usr &  zfs-1e90ee20c4627577
 9   3907012608   3907028991      8M  Solaris reserve
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 182F023B-C53D-4949-8CA9-209E34A8DCE3


#         Start          End    Size  Type            Name
 1         2048   3907012607    1.8T  Solaris /usr &  zfs-19ae99cec015d0db
 9   3907012608   3907028991      8M  Solaris reserve
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: BA7402DC-461B-6A4D-8611-DE3C7889E4F5


#         Start          End    Size  Type            Name
 1         2048   3907012607    1.8T  Solaris /usr &  zfs-8427c3bf89616cda
 9   3907012608   3907028991      8M  Solaris reserve
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 3252FCB6-A509-EE45-9A2B-6F6EC7612239


#         Start          End    Size  Type            Name
 1         2048   3907012607    1.8T  Solaris /usr &  zfs-061b31fabbe106cb
 9   3907012608   3907028991      8M  Solaris reserve
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 7DFD1DFA-E3D1-4D4C-BE65-3C971B422D61


#         Start          End    Size  Type            Name
 1         2048   3907012607    1.8T  Solaris /usr &  zfs-5d020760c598b14c
 9   3907012608   3907028991      8M  Solaris reserve
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 8E166E43-4E09-F44B-976F-CB2E0ED93945

blkid again

$ blkid
/dev/sda7: UUID="bbc69fc6-12fa-499a-a0c6-e0f65e248ce2" TYPE="xfs"
/dev/sda3: UUID="226e836d-7b8e-424c-b0a0-0397ee458c7c" TYPE="xfs"
/dev/sda6: UUID="4d91f3bb-8c97-43c8-acea-fb1dd1fe0ed7" TYPE="swap"
/dev/sda1: LABEL="System Reserved" UUID="A2885ECD885EA019" TYPE="ntfs"
/dev/sda2: UUID="5E2E62DB2E62ABA9" TYPE="ntfs"
/dev/sda5: UUID="60c94586-7d6a-4e8a-b350-04719990cb69" TYPE="xfs"
/dev/sdb1: LABEL="san" UUID="6838649739541725191" UUID_SUB="13087102930353693443" TYPE="zfs_member" PARTLABEL="zfs-1e90ee20c4627577" PARTUUID="00e53f8e-9545-844d-9a0e-6c8746643114"
/dev/sdb9: PARTUUID="c06d2bcf-5c87-f24c-8782-aed395d053d7"
/dev/sdc1: LABEL="san" UUID="6838649739541725191" UUID_SUB="2133500285998926230" TYPE="zfs_member" PARTLABEL="zfs-19ae99cec015d0db" PARTUUID="440f2613-f23b-3c4e-bd90-ce2ef28f3e9f"
/dev/sdc9: PARTUUID="ec587856-71ad-5d42-9ad0-8251ee74f151"
/dev/sdd1: LABEL="san" UUID="6838649739541725191" UUID_SUB="7987608574075307207" TYPE="zfs_member" PARTLABEL="zfs-8427c3bf89616cda" PARTUUID="6792f785-4803-1643-888b-a98fd6f6743e"
/dev/sdd9: PARTUUID="80203adf-4e65-5e42-8e9b-2a6ccf0eafca"
/dev/sde1: LABEL="san" UUID="6838649739541725191" UUID_SUB="676738182062217510" TYPE="zfs_member" PARTLABEL="zfs-061b31fabbe106cb" PARTUUID="1f50712e-0c01-d445-9ad7-381d08307c2b"
/dev/sde9: PARTUUID="ea6c550c-f1a7-4a48-bf51-72c4ba44ab00"
/dev/sdf1: LABEL="san" UUID="6838649739541725191" UUID_SUB="10361692541083745258" TYPE="zfs_member" PARTLABEL="zfs-5d020760c598b14c" PARTUUID="eaae6308-64b3-004d-a7c8-be4e55c8c859"
/dev/sdf9: PARTUUID="b0e178b5-12ec-ac44-a5a8-1a05228e2015"
/dev/sdg1: LABEL="san" UUID="6838649739541725191" UUID_SUB="4029408817980194900" TYPE="zfs_member" PARTLABEL="zfs-288cf7ef18c79daa" PARTUUID="ec08031c-df8f-cd4b-9e38-010b5e967cab"
/dev/sdg9: PARTUUID="4aa5c270-b2c6-4342-aea0-5ae7f4a1eba4"

smartctl logs

As mentioned in the comments below, I keep some smartctl logs via a script I use to check drive health.
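The script just appends a few smartctl fields per drive to a per-device log; a stripped-down sketch of it (not the exact script; the log names and drive list are illustrative):

#!/bin/bash
# append date, model and a few wear counters for each drive to sdX.log
for d in /dev/sd[a-g]; do
    log="$(basename "$d").log"
    {
        echo
        date
        smartctl -i "$d" | grep -E 'Model Number|Device Model'
        smartctl -A "$d" | awk '
            /Temperature_Celsius/    {print "Temp-  - " $10}
            /Power_On_Hours/         {print "Hours-  - " $10}
            /Reallocated_Sector_Ct/  {print "Reallocated sectors -  - " $10}
            /Current_Pending_Sector/ {print "Pending sectors-  - " $10}'
    } >> "$log"
done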

sda log; note that the drive behind it switched on Dec 28, for example

$ tail -n 80 sda.log
Reallocated sectors -  - 0"
Pending sectors-  - 24"

Mon Oct  2 21:35:53 PDT 2017
                Model Number:       ST2000DM001-1E6164
Temp-  - 35 (0 15 0 0 0)"
Hours-  - 26052"
Reallocated sectors -  - 2136"
Pending sectors-  - 840"

Sun Nov 26 21:17:10 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 37"
Hours-  - 21298"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 21:53:14 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 38"
Hours-  - 21299"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 22:32:53 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 39"
Hours-  - 21299"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 23:24:36 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 40"
Hours-  - 21300"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Thu Nov 30 18:46:03 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 35"
Hours-  - 21392"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Dec  5 17:31:57 PST 2017
                Model Number:       ST2000NM0011
Temp-  - 34 (0 25 0 0 0)"
Hours-  - 217"
Reallocated sectors -  - 438"
Pending sectors-  - 0"

Thu Dec 28 00:08:09 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 40"
Hours-  - 22037"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 13:05:22 PST 2018
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 38"
Hours-  - 22170"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 16:46:34 PST 2018
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 39"
Hours-  - 22174"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 23:09:37 PST 2018
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 40"
Hours-  - 22180"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

sdb log

$ tail -n 80 sdb.log
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Mon Oct  2 21:35:55 PDT 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 42"
Hours-  - 19982"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 21:17:11 PST 2017
                Model Number:       ST2000DM001-9YN164
Temp-  - 34 (0 17 0 0 0)"
Hours-  - 70405"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 21:53:16 PST 2017
                Model Number:       ST2000DM001-9YN164
Temp-  - 37 (0 17 0 0 0)"
Hours-  - 70406"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 22:32:55 PST 2017
                Model Number:       ST2000DM001-9YN164
Temp-  - 38 (0 17 0 0 0)"
Hours-  - 70406"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Sun Nov 26 23:24:37 PST 2017
                Model Number:       ST2000DM001-9YN164
Temp-  - 38 (0 17 0 0 0)"
Hours-  - 70407"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Thu Nov 30 18:46:04 PST 2017
                Model Number:       ST2000DM001-9YN164
Temp-  - 31 (0 17 0 0 0)"
Hours-  - 70498"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Dec  5 17:31:58 PST 2017
                Model Number:       WDC WD5000AACS-00ZUB0
Temp-  - 38"
Hours-  - 21510"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Thu Dec 28 00:08:10 PST 2017
                Model Number:       WDC WD20EZRX-00DC0B0
Temp-  - 36"
Hours-  - 35324"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 13:05:23 PST 2018
                Model Number:       WDC WD20EZRX-00DC0B0
Temp-  - 34"
Hours-  - 35457"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 16:46:34 PST 2018
                Model Number:       WDC WD20EZRX-00DC0B0
Temp-  - 34"
Hours-  - 35460"
Reallocated sectors -  - 0"
Pending sectors-  - 0"

Tue Jan  2 23:09:37 PST 2018
                Model Number:       WDC WD20EZRX-00DC0B0
Temp-  - 36"
Hours-  - 35467"
Reallocated sectors -  - 0"
Pending sectors-  - 0"
redhat
  • 1 answer
  • 138 Views
Brian Thomas
Asked: 2016-04-04 13:07:27 +0800 CST

zfs: convert a linear span to raidz5, or rebuild


Can I convert my linear span to raidz-5? I finally have enough hard drives, and I'd really rather not have to repopulate them from backups of the data they need to hold.

So at the moment, the command I built the pool with was not raid; it used a linear span.
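In other words, the difference is only in how the pool was created; roughly (illustrative commands, not the exact ones I originally ran):

# what I effectively have today: a plain concatenation, no redundancy
$ zpool create san sdd sdc sde

# what I would like to end up with: one raidz vdev across four 2TB drives
$ zpool create san raidz sda sdc sdd sde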

    $ zpool status
      pool: san
     state: ONLINE
    status: Some supported features are not enabled on the pool. The pool can
            still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
            the pool may no longer be accessible by software that does not support
            the features. See zpool-features(5) for details.
      scan: scrub repaired 0 in 7h28m with 0 errors on Sat Apr  2 02:58:32 2016
    config:

        NAME        STATE     READ WRITE CKSUM
        san         ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors

Can I add another 2tb drive and convert all of these straight to raidz5, so that my checksums and crc actually become worth something, without wiping my data?

$ zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
san                                4.91T   371G  2.85M  /san

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   1.8T  0 disk
`-sda1   8:1    0   1.8T  0 part
sdb      8:16   0 232.9G  0 disk
|-sdb1   8:17   0  1007K  0 part
|-sdb2   8:18   0     4G  0 part [SWAP]
|-sdb3   8:19   0    20G  0 part /
|-sdb4   8:20   0 208.9G  0 part /home
`-sdb5   8:21   0   3.1M  0 part
sdc      8:32   0   1.8T  0 disk
|-sdc1   8:33   0   1.8T  0 part
`-sdc9   8:41   0     8M  0 part
sdd      8:48   0   1.8T  0 disk
|-sdd1   8:49   0   1.8T  0 part
`-sdd9   8:57   0     8M  0 part
sde      8:64   0   1.8T  0 disk
|-sde1   8:65   0   1.8T  0 part
`-sde9   8:73   0     8M  0 part

sda is the newcomer.

zfs
  • 1 answer
  • 498 Views
