I just want to mount my NVMe SSD at /mnt/ssd-high-NVMe, but the mount silently fails: the command returns no error, yet the filesystem never shows up in df.
$ sudo rm -rf /mnt/ssd-high-NVMe
$ sudo rm -rf /mnt/ssd-high-NVME
$ sudo mkdir /mnt/ssd-high-NVMe
$ sudo mkdir /mnt/ssd-high-NVME
$ ls -lh
drwxr-xr-x 2 root root 4.0K Jan 20 22:58 ssd-high-NVMe
drwxr-xr-x 2 root root 4.0K Jan 20 22:42 ssd-high-NVME
$ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVMe
$ df -h
tmpfs 6.3G 2.3M 6.3G 1% /run
/dev/sdb3 110G 45G 59G 44% /
tmpfs 32G 95M 32G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sdb2 512M 7.8M 505M 2% /boot/efi
tmpfs 6.3G 180K 6.3G 1% /run/user/1000
$ sudo dmesg
[43391.301050] EXT4-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: (null)
$ sudo e2fsck /dev/nvme1n1p1
e2fsck 1.45.6 (20-Mar-2020)
NVMe-SSD: clean, 22967/30531584 files, 34978829/122096384 blocks
$ sudo nvme smart-log /dev/nvme1n1p1
Smart Log for NVME device:nvme1n1p1 namespace-id:ffffffff
critical_warning : 0
temperature : 35 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
endurance group critical warning summary: 0
data_units_read : 1,665,126
data_units_written : 2,815,185
host_read_commands : 53,190,654
host_write_commands : 83,501,433
controller_busy_time : 368
power_cycles : 27
power_on_hours : 25
unsafe_shutdowns : 11
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 35 C
Temperature Sensor 2 : 40 C
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
$ sudo vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=49b55adc-d909-470d-8a6b-87401c8ae63d / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda2 during installation
UUID=5624-9AA0 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
/dev/disk/by-uuid/6a4437ab-8812-484d-b799-4fd007593db4 /mnt/ssd-high-NVME auto rw,nosuid,nodev,relatime,uhelper=udisks2,x-gvfs-show 0 0
However, when I change the mount point to the other directory (using "ssd-high-NVME" instead of "ssd-high-NVMe"), everything works:
$ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVME
$ df -h
tmpfs 6.3G 2.3M 6.3G 1% /run
/dev/sdb3 110G 45G 59G 44% /
tmpfs 32G 95M 32G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sdb2 512M 7.8M 505M 2% /boot/efi
tmpfs 6.3G 180K 6.3G 1% /run/user/1000
/dev/nvme1n1p1 458G 126G 312G 29% /mnt/ssd-high-NVME <------ SUCCESS!
One thing might be important: I previously used /mnt/ssd-high-NVMe as the mount point for /dev/nvme1n1p1, did some bad things to the original /dev/nvme1n1p1, and corrupted it while it was still mounted. Afterwards I completely reformatted /dev/nvme1n1p1 (I am sure the disk itself is healthy). I suspect my problem is related to this. But how can I fix it? What further information should I provide?
Thanks!
Additional information
$ sudo gdisk -l /dev/nvme1n1
GPT fdisk (gdisk) version 1.0.5
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present
***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************
Disk /dev/nvme1n1: 976773168 sectors, 465.8 GiB
Model: Samsung SSD 980 PRO 500GB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 54BF3843-FF55-41C5-8FD5-25BF87B4DEEA
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 976773134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2029 sectors (1014.5 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 976773119 465.8 GiB 8300 Linux filesystem
TL;DR: a simple
$ sudo systemctl daemon-reload
followed by
$ sudo mount -a
should fix this.
There should be a systemd mount unit named mnt-ssd\x2dhigh\x2dNVME.mount (derived from your path; the \x2d is an escaped -), which you can inspect. The important part is its What property, which will probably show the UUID of the old disk. I assume there was no reboot between unmounting the old NVMe disk and mounting the new one, because the unit should be regenerated on a reboot.
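The unit name comes from systemd's path escaping, where each - in the path becomes \x2d and each / becomes -. As a rough illustration (a simplified sketch; the real rules are implemented by systemd-escape and handle more characters than just these two):

```shell
#!/usr/bin/env bash
# Simplified sketch of how systemd derives a mount unit name from a path.
path=/mnt/ssd-high-NVMe
unit=${path#/}            # drop the leading slash
unit=${unit//-/\\x2d}     # escape every '-' as \x2d
unit=${unit//\//-}        # turn every '/' into '-'
echo "${unit}.mount"      # -> mnt-ssd\x2dhigh\x2dNVMe.mount
```

On a system with systemd installed, `systemd-escape -p --suffix=mount /mnt/ssd-high-NVMe` produces the authoritative name.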
The problem is that systemd - for some reason I don't know - seems to force the use of the disk defined in the mount unit, even when an explicit mount DISK PATH is issued. In my case I was initially able to mount the new disk, and only after I (hot-)detached the old disk from the VM could I no longer mount any other disk there. When I detached the old disk, it even automatically unmounted the new disk from the mount point.
I think this is a compatibility bug between systemd and manual (u)mount. Systemd probably sees the old disk being removed - it is still referenced in the mount unit - marks the mount point as failed (or at least inactive), and performs some cleanup, including making sure nothing else is mounted on that path, or something similar. Why it is then impossible to mount another disk there afterwards is not clear to me.
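Put together, the check and the fix might look like this (unit name assumed from the fstab path above; the output depends on your system, so none is shown):

```
$ systemctl show -p What 'mnt-ssd\x2dhigh\x2dNVME.mount'
$ sudo systemctl daemon-reload
$ sudo mount -a
$ df -h /mnt/ssd-high-NVME
```

If the first command shows the old disk's UUID before the daemon-reload and the new one after, that confirms a stale unit was the culprit.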