I am running the latest Debian 7.7 x86 with ZFS on Linux.
After moving my computer to another room, a zpool status gives me this:
  pool: solaris
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        solaris                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            11552884637030026506                        UNAVAIL      0     0     0  was /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1
            ata-Hitachi_HDS723020BLA642_MN1221F308D55D  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30N4JED  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30N4B2D  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30JBJ8D  ONLINE       0     0     0
It says the unavailable disk is /dev/sdb1. After some investigation, I found out that ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1 is just a symlink to /dev/sdb1, and it does exist:
lrwxrwxrwx 1 root root 10 Jan 3 14:49 /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1 -> ../../sdb1
If I check the SMART status, e.g.:
# smartctl -H /dev/sdb
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
The disk is there. I can run fdisk on it, and everything else works.
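For example, both of these behave normally (representative of what I ran, not exact transcripts):

# read back the partition table
fdisk -l /dev/sdb
# raw sequential read of the first 100 MiB completes without errors
dd if=/dev/sdb of=/dev/null bs=1M count=100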
If I try to detach it, e.g.:
zpool detach solaris 11552884637030026506
cannot detach 11552884637030026506: only applicable to mirror and replacing vdevs
I also tried /dev/sdb, /dev/sdb1, and the long by-id name; it is the same error every time. I cannot replace it either, nor do anything else with it. I even tried powering the computer off and on again, to no avail. (The command variants I tried are listed below.)
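Roughly, these were the invocations, reconstructed from memory, so they may not be exact transcripts:

zpool detach solaris /dev/sdb
zpool detach solaris /dev/sdb1
zpool detach solaris ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1
zpool replace solaris 11552884637030026506 /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1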
Short of actually replacing the hard drive myself, I do not see any way to fix this.
Ideas?
[Update] Here is the output of blkid:
# blkid
/dev/mapper/q-swap_1: UUID="9e611158-5cbe-45d7-9abb-11f3ea6c7c15" TYPE="swap"
/dev/sda5: UUID="OeR8Fg-sj0s-H8Yb-32oy-8nKP-c7Ga-u3lOAf" TYPE="LVM2_member"
/dev/sdb1: UUID="a515e58f-1e03-46c7-767a-e8328ac945a1" UUID_SUB="7ceeedea-aaee-77f4-d66d-4be020930684" LABEL="q.heima.net:0" TYPE="linux_raid_member"
/dev/sdf1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="9314525646988684217" TYPE="zfs_member"
/dev/sda1: UUID="6dfd5546-00ca-43e1-bdb7-b8deff84c108" TYPE="ext2"
/dev/sdd1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="1776290389972032936" TYPE="zfs_member"
/dev/sdc1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="2569788348225190974" TYPE="zfs_member"
/dev/sde1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="10515322564962014006" TYPE="zfs_member"
/dev/mapper/q-root: UUID="07ebd258-840d-4bc2-9540-657074874067" TYPE="ext4"
After disabling mdadm and rebooting, the problem came back. I am not sure why sdb got marked as linux_raid_member. How do I clear that?
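In case it matters, this is what I am considering to clear the stale md signature. I have not run it yet, and I am assuming the md superblock sits at a different offset than the ZFS label, so zeroing it should leave the pool data intact; if that assumption is wrong, the last two steps are destructive:

# first just list the signatures on the partition (prints only; erases nothing)
wipefs /dev/sdb1
# zero only the md RAID superblock (assumption: it does not overlap the ZFS label)
mdadm --zero-superblock /dev/sdb1
# then try to bring the old vdev back (guessing the GUID is accepted here, as it is for detach)
zpool online solaris 11552884637030026506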