I have a ZFS mirror pool on my Proxmox instance. I'm trying to mount it read-only into an LXC container (preferably without giving it full filesystem access) so that Homepage can correctly detect used and available space for monitoring.
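For reference, the kind of mount I have in mind is a bind mount in the container config (a sketch; the container ID and paths are illustrative):

```
# /etc/pve/lxc/102.conf — bind-mount the pool root read-only into the container
mp0: /nvme,mp=/nvme,ro=1
```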
Other Server Fault questions and answers report that `df -hT` should be able to report the used space of a ZFS filesystem correctly (minus parity data, and possibly compressed or deduplicated data, but I'm not after 100% accuracy here). For me, however, `df -hT` reports the total space as available and the used space as 128K.
```
root@pve:~# df -hT /nvme
Filesystem     Type  Size  Used Avail Use% Mounted on
nvme           zfs   590G  128K  590G   1% /nvme
```
/nvme is the top-level ZFS dataset, isn't it? Why does the space accounting for these datasets look so odd?
```
root@pve:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
nvme                     309G   590G   128K  /nvme
nvme/base-103-disk-0    43.4G   631G  2.19G  -
nvme/nvmecheck            96K   590G    96K  /mnt/nvmecheck
nvme/subvol-102-disk-0   554M  1.46G   554M  /nvme/subvol-102-disk-0
nvme/subvol-105-disk-0  1.71G  2.29G  1.71G  /nvme/subvol-105-disk-0
nvme/subvol-106-disk-0   625M  1.39G   625M  /nvme/subvol-106-disk-0
nvme/subvol-107-disk-0  3.37G  11.6G  3.37G  /nvme/subvol-107-disk-0
nvme/subvol-108-disk-0  1.18G  1.82G  1.18G  /nvme/subvol-108-disk-0
nvme/subvol-109-disk-0   533M  1.48G   533M  /nvme/subvol-109-disk-0
nvme/vm-100-disk-0      43.3G   624G  8.96G  -
nvme/vm-101-disk-0       132G   607G   115G  -
nvme/vm-104-disk-0      82.5G   661G  11.6G  -
```
FYI, `nvme/nvmecheck` is a dataset I created to figure out how child datasets are accounted for. Its REFER is even stranger.
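In case anyone wants to reproduce that probe, this is roughly what I did (a sketch; the dataset name, mountpoint, and test file size are arbitrary):

```
# Create a child dataset with its own mountpoint, write some data,
# then watch how USED and REFER change for the parent and the child:
zfs create -o mountpoint=/mnt/nvmecheck nvme/nvmecheck
dd if=/dev/urandom of=/mnt/nvmecheck/testfile bs=1M count=256
zfs list -o name,used,referenced nvme nvme/nvmecheck
```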
Can anyone tell me what is going on here? Maybe it's something about Debian's ZFS implementation? Or is something wrong with `df`?

`zpool status` warns me that my zpool is missing features that could be enabled by running `zpool upgrade`, but I'm not sure whether that's safe to do. My Proxmox uses the new boot tool rather than the legacy one, so that shouldn't be an issue.
ZFS's USEDDS property might be relevant, but I'm not sure:
```
root@pve:~# zfs list -o space nvme
NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
nvme   590G  309G        0B      128K           0B       309G
```
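From the zfsprops man page I understand these columns should satisfy USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD, and they do here (0B + 128K + 0B + 309G). A quick way to check it with exact byte values (a sketch using the unabbreviated property names):

```
# -H: no header, tab-separated; -p: exact byte counts instead of human-readable
zfs list -Hp -o used,usedbysnapshots,usedbydataset,usedbyrefreservation,usedbychildren nvme \
  | awk '{ if ($1 == $2 + $3 + $4 + $5) print "identity holds"; else print "mismatch" }'
```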
As shown below, ZFS has a few properties reading 128K, which may also be where that number comes from.
```
root@pve:~# zfs get all nvme
NAME PROPERTY VALUE SOURCE
nvme type filesystem -
nvme creation Sat Jul 22 20:53 2023 -
nvme used 309G -
nvme available 590G -
nvme referenced 128K -
nvme compressratio 1.13x -
nvme mounted yes -
nvme quota none default
nvme reservation none default
nvme recordsize 128K default
nvme mountpoint /nvme default
nvme sharenfs off default
nvme checksum on default
nvme compression on local
nvme atime off local
nvme devices on default
nvme exec on default
nvme setuid on default
nvme readonly off default
nvme zoned off default
nvme snapdir hidden default
nvme aclmode discard default
nvme aclinherit restricted default
nvme createtxg 1 -
nvme canmount on default
nvme xattr on default
nvme copies 1 default
nvme version 5 -
nvme utf8only off -
nvme normalization none -
nvme casesensitivity sensitive -
nvme vscan off default
nvme nbmand off default
nvme sharesmb off default
nvme refquota none default
nvme refreservation none default
nvme guid [redacted] -
nvme primarycache all local
nvme secondarycache all default
nvme usedbysnapshots 0B -
nvme usedbydataset 128K -
nvme usedbychildren 309G -
nvme usedbyrefreservation 0B -
nvme logbias latency default
nvme objsetid 54 -
nvme dedup off default
nvme mlslabel none default
nvme sync standard default
nvme dnodesize legacy default
nvme refcompressratio 1.00x -
nvme written 128K -
nvme logicalused 164G -
nvme logicalreferenced 54.5K -
nvme volmode default default
nvme filesystem_limit none default
nvme snapshot_limit none default
nvme filesystem_count none default
nvme snapshot_count none default
nvme snapdev hidden default
nvme acltype off default
nvme context none default
nvme fscontext none default
nvme defcontext none default
nvme rootcontext none default
nvme relatime on default
nvme redundant_metadata all default
nvme overlay on default
nvme encryption off default
nvme keylocation none default
nvme keyformat none default
nvme pbkdf2iters 0 default
nvme special_small_blocks 0 default
```
My zpool properties:
```
root@pve:~# zpool get all nvme
NAME PROPERTY VALUE SOURCE
nvme size 928G -
nvme capacity 15% -
nvme altroot - default
nvme health ONLINE -
nvme guid [redacted] -
nvme version - default
nvme bootfs - default
nvme delegation on default
nvme autoreplace off default
nvme cachefile - default
nvme failmode wait default
nvme listsnapshots off default
nvme autoexpand off default
nvme dedupratio 1.00x -
nvme free 782G -
nvme allocated 146G -
nvme readonly off -
nvme ashift 12 local
nvme comment - default
nvme expandsize - -
nvme freeing 0 -
nvme fragmentation 12% -
nvme leaked 0 -
nvme multihost off default
nvme checkpoint - -
nvme load_guid [redacted] -
nvme autotrim off default
nvme compatibility off default
nvme bcloneused 0 -
nvme bclonesaved 0 -
nvme bcloneratio 1.00x -
nvme feature@async_destroy enabled local
nvme feature@empty_bpobj active local
nvme feature@lz4_compress active local
nvme feature@multi_vdev_crash_dump enabled local
nvme feature@spacemap_histogram active local
nvme feature@enabled_txg active local
nvme feature@hole_birth active local
nvme feature@extensible_dataset active local
nvme feature@embedded_data active local
nvme feature@bookmarks enabled local
nvme feature@filesystem_limits enabled local
nvme feature@large_blocks enabled local
nvme feature@large_dnode enabled local
nvme feature@sha512 enabled local
nvme feature@skein enabled local
nvme feature@edonr enabled local
nvme feature@userobj_accounting active local
nvme feature@encryption enabled local
nvme feature@project_quota active local
nvme feature@device_removal enabled local
nvme feature@obsolete_counts enabled local
nvme feature@zpool_checkpoint enabled local
nvme feature@spacemap_v2 active local
nvme feature@allocation_classes enabled local
nvme feature@resilver_defer enabled local
nvme feature@bookmark_v2 enabled local
nvme feature@redaction_bookmarks enabled local
nvme feature@redacted_datasets enabled local
nvme feature@bookmark_written enabled local
nvme feature@log_spacemap active local
nvme feature@livelist enabled local
nvme feature@device_rebuild enabled local
nvme feature@zstd_compress enabled local
nvme feature@draid enabled local
nvme feature@zilsaxattr disabled local
nvme feature@head_errlog disabled local
nvme feature@blake3 disabled local
nvme feature@block_cloning disabled local
nvme feature@vdev_zaps_v2 disabled local
```
`df` reports how much storage is used at the mount point itself, i.e. by the files currently stored directly in /nvme (not counting snapshots or reservations). Apparently all that lives there is a handful of subfolders acting as mount points for the other datasets, hence the tiny 128K. The USED column of `zfs list`, on the other hand, shows the usage of the entire subtree, including child datasets (and snapshots).
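So if the goal is just to feed used/free numbers to Homepage, one option is to read the subtree totals straight from ZFS instead of from `df` (a sketch; the awk formatting is illustrative and not tied to any particular Homepage widget):

```
# Whole-subtree usage for the pool root, in exact bytes (-H: no header, -p: parsable)
zfs list -Hp -o used,avail nvme \
  | awk '{ printf "used=%d free=%d pct=%.1f%%\n", $1, $2, 100 * $1 / ($1 + $2) }'
```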
`zpool upgrade` is generally safe to run; the only downside is that afterwards the pool can only be imported on systems that support all of the activated features, e.g. you may no longer be able to open it from an earlier snapshot of the system or from a rescue image.
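Before committing, you can see exactly what would change without touching the pool (a sketch using only stock zpool commands):

```
# With no arguments, zpool upgrade only *lists* pools that do not have
# all supported features enabled; it does not modify anything:
zpool upgrade
# Equivalently, show which features on this pool are still disabled:
zpool get all nvme | grep 'feature@' | grep disabled
# Only then actually enable them:
zpool upgrade nvme
```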