Is it possible to tell dracut not to use the config files in /etc/dracut.conf.d/ and /usr/lib/dracut/dracut.conf.d/ when creating an initramfs?
Before creating an initramfs for another system, I move the config files to a temporary directory and put them back after the build. I did not see any option for this in the dracut man page. Maybe I was not attentive enough.
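One direction worth testing (a sketch only, not a confirmed answer): dracut(8) documents a -c/--conf option and a --confdir option, which can point dracut at an empty config file and an empty config directory. Note that --confdir replaces the /etc/dracut.conf.d/ lookup; whether /usr/lib/dracut/dracut.conf.d/ is also skipped depends on the dracut version, so verify against your dracut(8). The output path below is illustrative.

```shell
# Sketch: build an initramfs while ignoring the host's dracut config,
# assuming dracut(8)'s documented -c/--conf and --confdir options.
emptydir=$(mktemp -d)    # empty stand-in for the conf.d directories

if command -v dracut >/dev/null 2>&1; then
    dracut --conf /dev/null --confdir "$emptydir" \
        /tmp/initramfs-test.img "$(uname -r)"
fi
```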
I have a Debian 9 system with iSCSI and multipath successfully configured:
# multipath -ll /dev/mapper/mpathb
mpathb (222c60001556480c6) dm-2 Promise,Vess R2600xi
size=10T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 12:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 13:0:0:0 sdd 8:48 active ready running
/dev/mapper/mpathb is part of the LVM volume group vg-one-100:
# pvs
PV VG Fmt Attr PSize PFree
/dev/dm-2 vg-one-100 lvm2 a-- 10,00t 3,77t
# vgs
VG #PV #LV #SN Attr VSize VFree
vg-one-100 1 17 0 wz--n- 10,00t 3,77t
The vg-one-100 group contains several volumes:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv-one-0-1 vg-one-100 -wi-a----- 20,00g
lv-one-1-0 vg-one-100 -wi-a----- 2,41g
lv-one-10-0 vg-one-100 -wi------- 20,00g
lv-one-11-0 vg-one-100 -wi------- 30,00g
lv-one-12-0 vg-one-100 -wi------- 2,41g
lv-one-13-0 vg-one-100 -wi------- 2,41g
lv-one-14-0 vg-one-100 -wi------- 2,41g
lv-one-15-0 vg-one-100 -wi------- 2,41g
lv-one-16-0 vg-one-100 -wi------- 2,41g
lv-one-17-0 vg-one-100 -wi------- 30,00g
lv-one-18-0 vg-one-100 -wi------- 30,00g
lv-one-23-0 vg-one-100 -wi------- 20,00g
lv-one-31-0 vg-one-100 -wi------- 20,00g
lv-one-8-0 vg-one-100 -wi------- 30,00g
lv-one-9-0 vg-one-100 -wi------- 20,00g
lvm_images vg-one-100 -wi-a----- 5,00t
lvm_system vg-one-100 -wi-a----- 1,00t
My lvm.conf includes the following filter:
# grep filter /etc/lvm/lvm.conf | grep -vE '^.*#'
filter = ["a|/dev/dm-*|", "r|.*|"]
global_filter = ["a|/dev/dm-*|", "r|.*|"]
lvmetad is disabled:
# grep use_lvmetad /etc/lvm/lvm.conf | grep -vE '^.*#'
use_lvmetad = 0
If lvmetad is disabled, lvm2-activation-generator is used instead.
In my case, lvm2-activation-generator generates all the required unit files, and they are executed during boot:
# ls -1 /var/run/systemd/generator/lvm2-activation*
/var/run/systemd/generator/lvm2-activation-early.service
/var/run/systemd/generator/lvm2-activation-net.service
/var/run/systemd/generator/lvm2-activation.service
# systemctl status lvm2-activation-early.service
● lvm2-activation-early.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 897 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation-net.service
● lvm2-activation-net.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:21:24 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 1537 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
lvm[1537]: 4 logical volume(s) in volume group "vg-one-100" now active
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation.service
● lvm2-activation.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 900 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
The problem is this: I cannot get all LVM volumes activated automatically during boot, because lvm2-activation-net.service activates the volumes right after the iSCSI disks are attached (logged in), not after the multipath device appears (journalctl fragment):
. . .
kernel: sd 11:0:0:0: [sdc] 21474836480 512-byte logical blocks: (11.0 TB/10.0 TiB)
kernel: sd 10:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 11:0:0:0: [sdc] Write Protect is off
kernel: sd 11:0:0:0: [sdc] Mode Sense: 97 00 10 08
kernel: sd 11:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 10:0:0:0: [sdb] Attached SCSI disk
kernel: sd 11:0:0:0: [sdc] Attached SCSI disk
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] (multiple)
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] (multiple)
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] successful.
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] successful.
systemd[1]: Started Login to default iSCSI targets.
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Starting Activation of LVM2 logical volumes...
multipathd[884]: sdb: add path (uevent)
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Reached target Remote File Systems (Pre).
systemd[1]: Mounting /var/lib/one/datastores/101...
systemd[1]: Mounting /var/lib/one/datastores/100...
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 1 1 service-time 0 1 1 8:16 1]
multipathd[884]: mpathb: event checker started
multipathd[884]: sdb [8:16]: path added to devmap mpathb
multipathd[884]: sdc: add path (uevent)
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 2 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:32 1]
. . .
The ordering dependencies of lvm2-activation-net.service look correct:
# grep After /var/run/systemd/generator/lvm2-activation-net.service
After=lvm2-activation.service iscsi.service fcoe.service
How can I get all logical volumes activated correctly during boot?
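One untested direction (a sketch under assumptions, not a verified fix): since the generated unit only orders itself after iscsi.service, a systemd drop-in could additionally order it after multipathd and wait for pending udev events, so the dm-* node exists before activation runs. Drop-ins under /etc/systemd/system apply to generator-created units as well. On a real host the drop-in directory would be /etc/systemd/system/lvm2-activation-net.service.d/; a temporary directory is used here only so the sketch is safe to run anywhere.

```shell
# Sketch: create a drop-in that delays lvm2-activation-net.service
# until multipathd has assembled its maps and udev has settled.
dropin_dir=$(mktemp -d)   # stand-in for /etc/systemd/system/lvm2-activation-net.service.d

cat > "$dropin_dir/multipath-wait.conf" <<'EOF'
[Unit]
# Do not start activation until multipathd is up
After=multipathd.service
[Service]
# Wait for pending udev events (the dm-* nodes); "-" tolerates failure
ExecStartPre=-/bin/udevadm settle --timeout=30
EOF

cat "$dropin_dir/multipath-wait.conf"
```

After installing such a drop-in on a real system, `systemctl daemon-reload` and a reboot would be needed to test whether activation now sees the mpath device.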
I have been trying to configure OpenNebula in a test environment consisting of two hosts.
The nebula machine contains the following:
root@nebula:/var/lib/one/datastores# onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
0 system - - 0 0 sys - ssh on
1 default 39.1G 70% 0 4 img fs ssh on
2 files 39.1G 70% 0 0 fil fs ssh on
100 images_shared 39.1G 70% 0 2 img fs shared on
104 lvm_system 39.1G 76% 0 0 sys - fs_lvm on
105 lvm_images 39.1G 70% 0 1 img fs fs_lvm on
106 lvm_system2 39.1G 76% 0 0 sys - fs_lvm on
root@nebula:/var/lib/one/datastores# ls /var/lib/one/datastores/
0 1 100 101 105 2
root@nebula:/var/lib/one/datastores# showmount -e
Export list for nebula:
/var/lib/one/datastores/105 192.168.122.0/24
/var/lib/one/datastores/100 192.168.122.0/24
The kvm-node-1 machine contains the following:
root@kvm-node-1:/var/lib/one/datastores# ls /var/lib/one/datastores/
0 100 104 105 106
root@kvm-node-1:/var/lib/one/datastores# mount|grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.122.240:/var/lib/one/datastores/100 on /var/lib/one/datastores/100 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
192.168.122.240:/var/lib/one/datastores/105 on /var/lib/one/datastores/105 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
root@kvm-node-1:/var/lib/one/datastores# vgs
VG #PV #LV #SN Attr VSize VFree
vg-one-0 1 1 0 wz--n- <10,00g <9,98g
I can deploy a VM with an image to the hypervisor via Sunstone, and the image boots successfully. But I cannot terminate the VM because of an error:
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Command execution failed (exit code: 5): /var/lib/one/remotes/tm/fs_lvm/delete nebula:/var/lib/one//datastores/0/29/disk.0 29 105
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 delete: Command " set -x
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 DEV=$(readlink /var/lib/one/datastores/0/29/disk.0)
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -d "/var/lib/one/datastores/0/29/disk.0" ]; then
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -rf "/var/lib/one/datastores/0/29/disk.0"
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 else
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -z "$DEV" ]; then
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 exit 0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if echo "$DEV" | grep "^/dev/" &>/dev/null; then
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 sudo lvremove -f $DEV
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi" failed: ++ readlink /var/lib/one/datastores/0/29/disk.0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + DEV=/dev/vg-one-0/lv-one-29-0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -d /var/lib/one/datastores/0/29/disk.0 ']'
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -z /dev/vg-one-0/lv-one-29-0 ']'
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + echo /dev/vg-one-0/lv-one-29-0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + grep '^/dev/'
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + sudo lvremove -f /dev/vg-one-0/lv-one-29-0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Volume group "vg-one-0" not found
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Cannot process volume group vg-one-0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 Error deleting /var/lib/one/datastores/0/29/disk.0
Fri Nov 9 16:04:55 2018 [Z0][TM][D]: Message received: TRANSFER FAILURE 29 Error deleting /var/lib/one/datastores/0/29/disk.0
How should I organize the sharing between the frontend machine and the hypervisor machines with LVM datastores to solve this problem?
I have overridden the dh_auto_install target in the debian/rules file.
Now all of my built components are installed into debian/tmp.
I have created a package.install file for each package in my suite, but I have this problem:
Package A should contain the scripts dir with two files.
Package B should also contain the scripts dir, but without the two files that go into package A.
Of course, I could list the needed files from the scripts dir explicitly in the debian/B.install file. But the scripts dir contains a large number of files, and listing every one of them would take a lot of time.
Is it possible to exclude certain specific files in a package.install file?
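One possible approach (a sketch only; the file names are illustrative): rather than excluding in the .install file itself, debhelper's dh_install(1) documents a -X/--exclude option that filters items by substring, and it can be run per package with -p. An override in debian/rules could then look like:

```make
# Hypothetical debian/rules fragment: install package B's files while
# excluding the two files that belong to package A, then let the
# remaining packages be processed normally (--remaining-packages is
# documented in debhelper(7)).
override_dh_install:
	dh_install -pB -Xscripts/file-for-A-1 -Xscripts/file-for-A-2
	dh_install --remaining-packages
```

Note that -X matches substrings of the path, so the exclusion patterns should be specific enough not to drop unrelated files.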
I have this array:
PARAMETERS_OF_COMPONENTS[1]="component1"
PARAMETERS_OF_COMPONENTS[2]="component21 component22 component23"
PARAMETERS_OF_COMPONENTS[3]="component3"
PARAMETERS_OF_COMPONENTS[4]="component41 component42 component43"
I want to pass that array to this function:
foo()
{
local param1="$1"
local param2="$2"
local array_param="$3"
. . .
echo "${array_param[@]}"
}
When I pass the array this way:
foo "$param1" "$param2" "${PARAMETERS_OF_COMPONENTS[@]}"
then the function prints only:
component1
I have also tried several other ways to pass the array, but I still have not found the right solution.
How do I correctly pass an array to a function? Moreover, the solution must be compatible with Dash (no bashisms, at least).
UPD: @Kusalananda explained to me that Dash does not support arrays. Thanks for the clarification.
So I will ask a different question: how can I pass many parameters to a function without referring to them directly as $1, $2, ... and without involving global variables? I have a few ideas, but I would like to hear your approaches.
My task is to move a function from one file to another, but the function uses global variables. I do not want to use global variables. What is the cleanest way to do this?
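One POSIX-compatible pattern (a sketch; the function and parameter names are illustrative): pass the variable-length list after the fixed parameters, then consume the fixed ones and `shift` them away, leaving "$@" as the "array". This runs in dash because it uses only positional parameters.

```shell
#!/bin/sh
# POSIX sketch: a fixed prefix of parameters followed by an arbitrary
# number of component values, consumed via shift and "$@" (dash-safe).
foo() {
    param1="$1"
    param2="$2"
    shift 2                      # everything left is the "array"
    for component in "$@"; do
        printf 'component: %s\n' "$component"
    done
}

foo fixed1 fixed2 "component1" "component21 component22 component23" "component3"
# component: component1
# component: component21 component22 component23
# component: component3
```

Quoting each element at the call site preserves elements that contain spaces, which is the closest dash equivalent of passing a bash array.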