I have one 8TB disk and two 4TB disks. I would like to know whether I can create an 8TB raid from the 8+4+4 disks.
I tried this: sudo mkfs.btrfs -f -m raid1 -d raid1 /dev/sdc1 /dev/sdd1 /dev/cdb1
but it only produces a 4TB array.
From my point of view it should technically be possible to build an 8TB stripe from the two 4TB disks and then create an 8TB mirror. Can this be done with the btrfs tools?
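For context, btrfs raid1 keeps two copies of every chunk on two different devices and allocates new chunks to the devices with the most free space, so with mixed sizes the expected usable capacity can be sanity-checked with simple arithmetic. A sketch, using the whole-TB figures from the question rather than real device sizes:

```shell
# Expected usable capacity of btrfs raid1 over mixed-size devices:
# every chunk is stored twice on two *different* devices, so capacity is
# limited both by half the total space and by how much space the other
# devices can pair against the largest one.
total=$((8 + 4 + 4))            # TB, the 8+4+4 layout from the question
largest=8
half=$((total / 2))             # 8 TB
rest=$((total - largest))       # the two 4 TB disks pair against the 8 TB disk
if [ "$half" -lt "$rest" ]; then usable=$half; else usable=$rest; fi
echo "expected usable: ${usable} TB"
```

So in principle the 8+4+4 layout should give about 8TB usable under raid1; a smaller figure reported by some tools may just be a pessimistic free-space estimate rather than the real allocation limit.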
btrfs.readthedocs.io describes the "dup" profile as duplicating data on a single "device". In the several places it comes up on that site, the description never makes clear whether that means a physical drive with two mirrored partitions, or a partition that hides the duplication inside itself.
Some descriptions of "dup" hint that it is a special kind of raid1 tuned to run on a single device with two partitions, while other sources seem to say it adds the duplication inside a single partition. (Either way the actual disk usage is the same, of course.) As I see it, either "dup" works with a single partition but without real duplication, or two partitions might force sub-optimal performance.
This is a spinning hard disk used for backup rather than primary access ("dup" guards against bit rot; the other copies are replicated off-site with hashes and checksums). I know many SSDs may deduplicate internally.
Does anyone know exactly how the "dup" profile behaves? My C reading skills are not up to working through the kernel source files.
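One way to settle this empirically, without reading kernel source, is to format a scratch file-backed image with dup and dump its chunk tree: if both stripes of each chunk name the same devid at different offsets, the duplication lives inside the one device (and therefore inside a single partition), not across two partitions. A sketch, assuming btrfs-progs is installed; no root or real disk is needed for a file-backed image:

```shell
# Probe how the dup profile lays out data: format a scratch file-backed
# image and dump its chunk tree.
command -v mkfs.btrfs >/dev/null || { echo "btrfs-progs not installed"; exit 0; }
img=$(mktemp /tmp/dup-probe.XXXXXX)
truncate -s 512M "$img"
mkfs.btrfs -q -f -m dup -d dup "$img"
# DUP chunks appear as "num_stripes 2" with BOTH stripes on devid 1 at
# different offsets, i.e. two copies inside the one device/partition.
btrfs inspect-internal dump-tree -t chunk "$img" | grep -B1 -A4 'num_stripes 2'
rm -f "$img"
```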
Help! Cannot read the superblock
I was using my computer (Arch) as usual, running Android Studio, when it suddenly broke and asked me to restart the IDE because the file system had become read-only. I rebooted the whole machine and now I cannot mount the btrfs file system. I am on the latest LTS kernel.
I am new to this; I use btrfs because it has become the new default.
How do I fix this? Please help! So far I have tried:
liveuser@localhost-live:~$ sudo btrfs rescue super-recover /dev/sdb3
All supers are valid, no need to recover
liveuser@localhost-live:~$ sudo btrfs rescue zero-log /dev/sdb3
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
WARNING: could not setup csum tree, skipping it
parent transid verify failed on 711655424 wanted 368940 found 368652
parent transid verify failed on 711655424 wanted 368940 found 368652
ERROR: could not open ctree
liveuser@localhost-live:~$ sudo btrfs scrub start /dev/sdb3
ERROR: '/dev/sdb3' is not a mounted btrfs device
liveuser@localhost-live:~$ sudo btrfs scrub status /dev/sdb3
ERROR: '/dev/sdb3' is not a mounted btrfs device
liveuser@localhost-live:~$ sudo mount -o usebackuproot /dev/sdb3 /mnt
mount: /mnt: fsconfig system call failed: File exists.
dmesg(1) may have more information after failed mount system call.
liveuser@localhost-live:~$ sudo btrfs check /dev/sdb3
Opening filesystem to check...
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
Ignoring transid failure
ERROR: root [7 0] level 0 does not match 2
ERROR: could not setup csum tree
ERROR: cannot open file system
Chunk recovery was also attempted:
liveuser@localhost-live:~$ sudo btrfs rescue chunk-recover /dev/sdb3
Scanning: DONE in dev0
corrupt leaf: root=1 block=713392128 slot=0, unexpected item end, have 16283 expect 0
leaf free space ret -3574, leaf data size 0, used 3574 nritems 11
leaf 713392128 items 11 free space -3574 generation 368940 owner ROOT_TREE
leaf 713392128 flags 0x1(WRITTEN) backref revision 1
fs uuid 6d8d36ba-d266-4b34-88ad-4f81c383a521
chunk uuid 52ed2048-4a76-4a75-bb75-e1a118ec8118
ERROR: leaf 713392128 slot 0 pointer invalid, offset 15844 size 439 leaf data limit 0
ERROR: skip remaining slots
corrupt leaf: root=1 block=713392128 slot=0, unexpected item end, have 16283 expect 0
leaf free space ret -3574, leaf data size 0, used 3574 nritems 11
leaf 713392128 items 11 free space -3574 generation 368940 owner ROOT_TREE
leaf 713392128 flags 0x1(WRITTEN) backref revision 1
fs uuid 6d8d36ba-d266-4b34-88ad-4f81c383a521
chunk uuid 52ed2048-4a76-4a75-bb75-e1a118ec8118
ERROR: leaf 713392128 slot 0 pointer invalid, offset 15844 size 439 leaf data limit 0
ERROR: skip remaining slots
Couldn't read tree root
open with broken chunk error
And restore, after the rescue attempts:
liveuser@localhost-live:~$ sudo btrfs restore /dev/sdb3 /dev/sda5
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
Ignoring transid failure
ERROR: root [7 0] level 0 does not match 2
WARNING: could not setup csum tree, skipping it
parent transid verify failed on 711655424 wanted 368940 found 368652
parent transid verify failed on 711655424 wanted 368940 found 368652
parent transid verify failed on 711655424 wanted 368940 found 368652
Ignoring transid failure
ERROR: root [5 0] level 0 does not match 2
Could not open root, trying backup super
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
parent transid verify failed on 711704576 wanted 368940 found 368652
Ignoring transid failure
ERROR: root [7 0] level 0 does not match 2
WARNING: could not setup csum tree, skipping it
parent transid verify failed on 711655424 wanted 368940 found 368652
parent transid verify failed on 711655424 wanted 368940 found 368652
parent transid verify failed on 711655424 wanted 368940 found 368652
Ignoring transid failure
ERROR: root [5 0] level 0 does not match 2
Could not open root, trying backup super
ERROR: superblock bytenr 274877906944 is larger than device size 209715200000
Could not open root, trying backup super
According to smartctl the drive is healthy: no reallocated sectors, and the other ntfs/ext4 partitions work fine.
Even just getting the data back would be enough. Thanks!
I am heartbroken about losing years of carefully accumulated data. The only backup I have is from several months ago, and I have made a lot of changes since then. :'(
I am considering changing the file system on my data drive from ext4 to btrfs, because btrfs supports compression and my storage space will eventually run out.
I have seen that btrfs can compress with zlib, lzo and zstd.
https://btrfs.readthedocs.io/en/latest/Compression.html
How can I do some test runs to see how well the compression works?
Is there a way to write the data to /dev/null or somewhere similar and count how many bytes pass through?
How can I test the different compressors (zlib, lzo and zstd) without actually writing anything, just to see how much they would compress?
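One rough way to preview the ratios without touching btrfs is to run the same algorithm families' standalone compressors over a sample of the data. A sketch using whichever of gzip (zlib's format), lzop and zstd happen to be installed; real btrfs results will differ somewhat, since btrfs compresses extent-by-extent in 128 KiB blocks and bails out on incompressible data, and on an already-converted btrfs mount the `compsize` tool reports actual on-disk ratios:

```shell
# Compare how well zlib/lzo/zstd-style compression does on a data sample.
sample=$(mktemp)
seq 1 200000 > "$sample"          # substitute a representative file of yours
orig=$(wc -c < "$sample")
for tool in gzip lzop zstd; do
  command -v "$tool" >/dev/null || { echo "$tool: not installed, skipping"; continue; }
  size=$("$tool" -c < "$sample" | wc -c)
  echo "$tool: $orig -> $size bytes"
done
rm -f "$sample"
```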
The output of blkid shows an entry called UUID_SUB for my BTRFS volume. What does it mean, and where can I find more information?
/dev/sdc: LABEL="example" UUID="e7c116be-e3ba-4097-857b-12a1f4e9f753" UUID_SUB="b263e0c0-1714-48f1-8706-f97dec03b355" BLOCK_SIZE="4096" TYPE="btrfs"
I created a BTRFS swap file with the following command:
$ btrfs filesystem mkswapfile -s 8G SwapFile
However, swap is only using 1 GB of the swap file, as the following command sequence shows:
$ du -csh SwapFile ; free
8.0G SwapFile
8.0G total
total used free shared buff/cache available
Mem: 12148108 5915736 1232604 743948 6132596 6232372
Swap: 1048572 35576 1012996
Is there some quota or similar option I can use to force the Linux swap machinery to use the whole BTRFS swap file I created?
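It may help to first confirm what the kernel actually activated; `free` only shows a total, so a leftover 1 GiB swap partition from /etc/fstab could be active while the new file never was. A quick check, assuming nothing beyond util-linux:

```shell
# List every active swap area with its size and priority; if SwapFile is
# absent here, it was never activated and the 1 GiB seen in `free` is
# some other swap device (e.g. a partition or zram).
swapon --show
# The same information straight from the kernel:
cat /proc/swaps
# If SwapFile is missing from the list, activate it explicitly:
# sudo swapon /path/to/SwapFile
```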
I am using BTRFS in a virtual machine.
I followed this guide to make sure my /home/.snapshot folder is its own subvolume. Everything seems to work: I can take snapshots, list snapshots, run undochanges, and so on.
Steps:
Details:
I send an existing snapshot to a different subvolume (device):
sudo btrfs subvolume list -t /mnt_device3
ID gen top level path
-- --- --------- ----
256 9 5 @backup
sudo mkdir /backup/1
sudo btrfs send /home/.snapshots/1/snapshot | sudo btrfs receive /backup/1
sudo btrfs subvolume list -t /backup
ID gen top level path
-- --- --------- ----
256 17 5 @backup
258 18 256 1/snapshot
Then I delete the snapshot with snapper:
sudo snapper -c home delete 1
sudo snapper -c home list
# | Type | Pre # | Date | User | Cleanup | Description | Userdata
---+--------+-------+-----------------------------+------+---------+--------------------------------------------------+---------
0 | single | | | root | | current |
2 | single | | Tue 04 Jul 2023 03:20:01 PM | root | | testing send/receive after test.txt |
Then I use send/receive again to put the snapshot back:
sudo mkdir /home/.snapshots/1
sudo btrfs send /backup/1/snapshot | sudo btrfs receive /home/.snapshots/1
ls /home/.snapshots
total 0
drwxr-xr-x 1 root root 6 Jul 4 16:30 .
drwxr-xr-x 1 root root 32 Jun 29 11:36 ..
drwxr-xr-x 1 root root 16 Jul 4 16:11 1
drwxr-xr-x 1 root root 32 Jul 4 15:20 2
#an excerpt of sudo btrfs subvolume list /home
ID 311 gen 1147 top level 272 path 2/snapshot
ID 313 gen 1214 top level 272 path 1/snapshot
But snapper does not recognize snapshot 1:
sudo snapper -c home list
# | Type | Pre # | Date | User | Cleanup | Description | Userdata
---+--------+-------+-----------------------------+------+---------+--------------------------------------------------+---------
0 | single | | | root | | current |
2 | single | | Tue 04 Jul 2023 03:20:01 PM | root | | testing send/receive after test.txt |
So I can no longer do anything with snapshot 1 through snapper.
Is there a way to make snapper recognize the imported snapshot?
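For what it's worth, snapper tracks each snapshot through a per-snapshot metadata file, /home/.snapshots/&lt;n&gt;/info.xml, stored next to the snapshot subvolume, and btrfs send/receive does not carry that file along. A hand-written one for the re-imported snapshot might look roughly like this (a sketch only; the field names mirror those in snapper's own snapshots, but the values here are assumptions you would adjust):

```xml
<?xml version="1.0"?>
<snapshot>
  <type>single</type>
  <num>1</num>
  <date>2023-07-04 15:11:00</date>
  <description>reimported via send/receive</description>
</snapshot>
```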
A few years ago I used this excellent guide https://www.youtube.com/watch?v=co5V2YmFVEE to encrypt my Thinkpad disk with LUKS, with BTRFS as my file system.
Back then my SSD was only 256GB; I have now upgraded to 1TB and cloned my drive to the new SSD with Clonezilla. The only remaining question is: how do I safely expand my LUKS-encrypted partition and the BTRFS system underneath it (with its 2 subvolumes, root and home)?
My /etc/fstab:
# /dev/nvme0n1p1
UUID=6E39-1234 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/mapper/cryptroot
UUID=d7cf34c3-8fb4-4cbb-b04b-96e8121e11d9 / btrfs rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,subvolid=256,subvol=/@ 0 0
# /dev/mapper/cryptroot
UUID=d7cf34c3-8fb4-4cbb-b04b-96e8121e11d9 /home btrfs rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,subvolid=257,subvol=/@home 0 0
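Assuming the layout implied by the fstab above (an EFI partition plus one LUKS partition holding the btrfs pool, mapped as cryptroot), the usual grow sequence after cloning to a larger disk is: enlarge the partition, then the LUKS mapping, then the file system. A sketch collected into a function for review, not a tested procedure; the disk and partition numbers are assumptions, and a backup should come first:

```shell
grow_encrypted_btrfs() {
  # 1. Grow partition 2 of the NVMe disk to the end of the new 1 TB SSD.
  sudo parted /dev/nvme0n1 resizepart 2 100%
  # 2. Make the LUKS mapping cover the enlarged partition
  #    (with LUKS2 this may prompt for the passphrase again).
  sudo cryptsetup resize cryptroot
  # 3. Grow btrfs to fill the mapper device; @ and @home are subvolumes
  #    of the same pool, so a single resize covers both mounts.
  sudo btrfs filesystem resize max /
}
```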
I deleted all files on the 4TB disk /dev/sdb1. This happened through an rsync --delete command.
Before it was stopped, rsync wrote about 10GB of data.
Naturally there are no snapshots on the disk.
The btrfs file system was mounted on the /home/user/Downloads folder.
So I would like to know whether there is any way to recover the data.
So far (after unmounting the disk) I have tried:
btrfs restore -i /dev/sdb1 /mnt/RESTORE/
which only recovered the 10GB of new files.
./btrfs.sh /dev/sdb1 /home/user/Downloads/* /mnt/RESTORE/
Result:
[...]
Trying root 3001138823168... (1096/1103)
Trying root 853360640... (1097/1103)
Trying root 50626560... (1098/1103)
Trying root 31309824... (1099/1103)
Trying root 31129600... (1100/1103)
Trying root 30900224... (1101/1103)
Trying root 30818304... (1102/1103)
Trying root 30408704... (1103/1103)
Didn't find 'home/user/D*/*
btrfs restore -t 3001556484096 /dev/sdb1 /mnt/RESTORE/
Output:
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
Ignoring transid failure
ERROR: root [1 0] level 0 does not match 1
Couldn't read tree root
Could not open root, trying backup super
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
Ignoring transid failure
ERROR: root [1 0] level 0 does not match 1
Couldn't read tree root
Could not open root, trying backup super
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
parent transid verify failed on 3001556484096 wanted 96918 found 96231
Ignoring transid failure
ERROR: root [1 0] level 0 does not match 1
Couldn't read tree root
Could not open root, trying backup super
btrfs-find-root -a /dev/sdb1
Output:
Superblock thinks the generation is 96918
Superblock thinks the level is 1
[...]
Well block 3001381945344(gen: 94646 level: 0) seems good, but generation/level doesn't match, want gen: 96918 level: 1
Well block 3001359089664(gen: 94635 level: 0) seems good, but generation/level doesn't match, want gen: 96918 level: 1
Well block 853360640(gen: 94238 level: 0) seems good, but generation/level doesn't match, want gen: 96918 level: 1
btrfs rescue super-recover -v /dev/sdb1
Output:
All Devices:
Device: id = 1, name = /dev/sdb1
Before Recovering:
[All good supers]:
device name = /dev/sdb1
superblock bytenr = 65536
device name = /dev/sdb1
superblock bytenr = 67108864
device name = /dev/sdb1
superblock bytenr = 274877906944
[All bad supers]:
All supers are valid, no need to recover
Any help would be greatly appreciated :)
Update 1:
Unfortunately I was only able to recover a handful of corrupted files. Here is what I did, following the btrfs-undelete script.
Collect the candidate tree roots into a /tmp/ID file:
btrfs-find-root -a /dev/sdb1 2>&1 | grep ^Well | sed -r -e 's/Well block ([0-9]+).*/\1/' | sort -rn > /tmp/ID
for i in $(cat /tmp/ID) ; do mkdir /mnt/RESTORE/"$i"; btrfs restore -o -iv -t "$i" /dev/sdb1 /mnt/RESTORE/"$i" 2>&1; done
I suppose the only option left now is to look for file-recovery software...