I have some dedicated root servers at Hetzner which are connected via VLAN using a vSwitch. Now I am wondering whether servers in the Hetzner Cloud can be connected to the same vSwitch so that they can communicate over the VLAN?
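As a sketch of the direction an answer could take: Hetzner Cloud networks can expose a subnet of type vswitch that is coupled to a dedicated-server vSwitch, after which cloud servers attached to that network can reach the root servers over the VLAN. A hedged example with the hcloud CLI, where the network name, IP ranges and the vSwitch ID 12345 are placeholders, not values from the question:

# Create a cloud network and couple one of its subnets to the vSwitch.
# --vswitch-id is the ID of the vSwitch configured in the Robot panel.
hcloud network create --name vswitch-net --ip-range 10.0.0.0/16
hcloud network add-subnet vswitch-net --type vswitch --vswitch-id 12345 \
    --network-zone eu-central --ip-range 10.0.1.0/24

# Attach a cloud server to that network so it can talk to the root servers.
hcloud server attach-to-network my-cloud-server --network vswitch-net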
I configured a VLAN for my network interface. Since the last reboot it no longer comes up, and if I try to bring it up manually, it fails:
sudo ifup eno1.4000
RTNETLINK answers: File exists
ifup: failed to bring up eno1
ifup: could not bring up parent interface eno1
My OS is Debian 10, and this is my configuration:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto eno1
iface eno1 inet static
    address xxx.xxx.xxx.xxx
    netmask 255.255.255.224
    gateway xxx.xxx.xxx.xxx
    up route add -net xxx.xxx.xxx netmask 255.255.255.224 gw xxx.xxx.xxx.xxx dev eno1

iface eno1 inet6 static
    address xxxx:xxxx:xxxx:xxxx::x
    netmask 64
    gateway fe80::1

auto eno1.4000
iface eno1.4000 inet static
    address 192.168.100.4
    netmask 255.255.255.0
    vlan-raw-device eno1
    mtu 1400
The boot log shows:
Aug 19 17:08:50 xxxx ifup[820]: ifup: failed to bring up eno1
Aug 19 17:08:50 xxxx ifup[820]: RTNETLINK answers: File exists
Aug 19 17:08:50 xxxx ifup[820]: ifup: failed to bring up eno1
Aug 19 17:08:50 xxxx ifup[820]: ifup: could not bring up parent interface eno1
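A hedged troubleshooting sketch: "RTNETLINK answers: File exists" from ifup typically means an address, route or link that ifupdown wants to create is already present (for example left over from a partially completed boot), so it refuses to bring the interface up. Clearing that state manually before retrying is one way to confirm this; eno1/eno1.4000 are taken from the configuration above:

# Inspect what is already configured on the parent interface
ip addr show dev eno1
ip route show dev eno1

# Flush leftover addresses and remove a half-created VLAN device,
# then let ifupdown rebuild both interfaces from the config file
sudo ip addr flush dev eno1
sudo ip link delete eno1.4000 2>/dev/null
sudo ifup eno1
sudo ifup eno1.4000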
When setting up GlusterFS, what are the requirements for the bricks, i.e. the directories on the gluster servers, in terms of
- permissions
- ownership

I know gluster is fully POSIX-compliant, but I would like to know how the bricks on each server have to be configured so that gluster clients can use the gluster volume. Currently my brick is configured like this:
# ls -l /data/gluster/
total 0
drwxr-xr-x 7 root root 86 Dec 30 19:54 brick1
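For comparison, a minimal brick-preparation sketch along the lines of the upstream GlusterFS guidance (dedicated XFS filesystem, brick as a subdirectory of the mount point, created as root:root and left to gluster to manage); the device name /dev/sdb1 and the paths are assumptions:

# Hypothetical dedicated brick filesystem; XFS with 512-byte inodes is
# the commonly documented choice for gluster bricks
sudo mkfs.xfs -i size=512 /dev/sdb1
sudo mkdir -p /data/gluster
echo '/dev/sdb1 /data/gluster xfs defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /data/gluster

# The brick itself is a subdirectory of the mount point, owned by root;
# gluster manages ownership/permissions inside the brick as needed
sudo mkdir /data/gluster/brick1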
Update: Upgraded to the latest version 5.2 and refreshed the logs below accordingly. However, the problem remains the same.

Update 2: Also updated the client to 5.2, still the same problem.
I have a gluster cluster set up with 3 nodes:
- server1, 192.168.100.1
- server2, 192.168.100.2
- server3, 192.168.100.3
They are connected via the internal network 192.168.100.0/24. However, I want to connect a client from outside that network using the public IP of one of the servers, which does not work:
sudo mount -t glusterfs x.x.x.x:/datavol /mnt/gluster/
It produces something like this in the log:
[2018-12-15 17:57:29.666819] I [fuse-bridge.c:4153:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-12-15 18:23:47.892343] I [fuse-bridge.c:4259:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-12-15 18:23:47.892375] I [fuse-bridge.c:4870:fuse_graph_sync] 0-fuse: switched to graph 0
[2018-12-15 18:23:47.892475] I [MSGID: 108006] [afr-common.c:5650:afr_local_init] 0-datavol-replicate-0: no subvolumes up
[2018-12-15 18:23:47.892533] E [fuse-bridge.c:4328:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2018-12-15 18:23:47.892651] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
[2018-12-15 18:23:47.892668] W [fuse-bridge.c:3250:fuse_statfs_resume] 0-glusterfs-fuse: 2: STATFS (00000000-0000-0000-0000-000000000001) resolution fail
[2018-12-15 18:23:47.892773] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 3: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.894204] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 4: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.894367] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 5: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.916333] I [fuse-bridge.c:5134:fuse_thread_proc] 0-fuse: initating unmount of /mnt/gluster
The message "I [MSGID: 108006] [afr-common.c:5650:afr_local_init] 0-datavol-replicate-0: no subvolumes up" repeated 4 times between [2018-12-15 18:23:47.892475] and [2018-12-15 18:23:47.894347]
[2018-12-15 18:23:47.916555] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494) [0x7f90f2306494] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xfd) [0x5591a51e87ed] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x5591a51e8644] ) 0-: received signum (15), shutting down
[2018-12-15 18:23:47.916573] I [fuse-bridge.c:5897:fini] 0-fuse: Unmounting '/mnt/gluster'.
[2018-12-15 18:23:47.916582] I [fuse-bridge.c:5902:fini] 0-fuse: Closing fuse connection to '/mnt/gluster'.
What I can see is
0-datavol-replicate-0: no subvolumes up
and
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
The firewall ports (24007-24008, 49152-49156) are open on the public network interface.
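One hedged reading of these log lines: the client fetches the volfile shown below, which addresses every brick by its internal IP (option remote-host 192.168.100.x), so an external client can reach the management port but no subvolume, hence "no subvolumes up". A common workaround, assuming the peers are (re)probed by hostname instead of IP, is to map those hostnames to the public IPs on the client; the hostnames gluster1..gluster3 and the 203.0.113.x addresses here are purely illustrative:

# /etc/hosts on the external client
203.0.113.1  gluster1
203.0.113.2  gluster2
203.0.113.3  gluster3

# Verify the management and brick ports are reachable from outside
nc -vz gluster1 24007
nc -vz gluster1 49152

sudo mount -t glusterfs gluster1:/datavol /mnt/gluster/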
gluster volume heal datavol info:
Brick 192.168.100.1:/data/gluster/brick1
Status: Connected
Number of entries: 0
Brick 192.168.100.2:/data/gluster/brick1
Status: Connected
Number of entries: 0
Brick 192.168.100.3:/data/gluster/brick1
Status: Connected
Number of entries: 0
Cluster info (the client volfile):
volume datavol-client-0
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.1
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-client-1
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.2
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-client-2
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.3
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-replicate-0
    type cluster/replicate
    subvolumes datavol-client-0 datavol-client-1 datavol-client-2
end-volume

volume datavol-dht
    type cluster/distribute
    option lock-migration off
    subvolumes datavol-replicate-0
end-volume

volume datavol-write-behind
    type performance/write-behind
    subvolumes datavol-dht
end-volume

volume datavol-read-ahead
    type performance/read-ahead
    subvolumes datavol-write-behind
end-volume

volume datavol-readdir-ahead
    type performance/readdir-ahead
    subvolumes datavol-read-ahead
end-volume

volume datavol-io-cache
    type performance/io-cache
    subvolumes datavol-readdir-ahead
end-volume

volume datavol-quick-read
    type performance/quick-read
    subvolumes datavol-io-cache
end-volume

volume datavol-open-behind
    type performance/open-behind
    subvolumes datavol-quick-read
end-volume

volume datavol-md-cache
    type performance/md-cache
    subvolumes datavol-open-behind
end-volume

volume datavol
    type debug/io-stats
    option log-level INFO
    option latency-measurement off
    option count-fop-hits off
    subvolumes datavol-md-cache
end-volume

volume meta-autoload
    type meta
    subvolumes datavol
end-volume
gluster peer status:
root@server1 /data # gluster peer status
Number of Peers: 2
Hostname: 192.168.100.2
Uuid: 0cb2383e-906d-4ca6-97ed-291b04b4fd10
State: Peer in Cluster (Connected)
Hostname: 192.168.100.3
Uuid: d2d9e82f-2fb6-4f27-8fd0-08aaa8409fa9
State: Peer in Cluster (Connected)
gluster volume status:
Status of volume: datavol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.100.1:/data/gluster/brick1 49152 0 Y 13519
Brick 192.168.100.2:/data/gluster/brick1 49152 0 Y 30943
Brick 192.168.100.3:/data/gluster/brick1 49152 0 Y 24616
Self-heal Daemon on localhost N/A N/A Y 3282
Self-heal Daemon on 192.168.100.2 N/A N/A Y 18987
Self-heal Daemon on 192.168.100.3 N/A N/A Y 24638
Task Status of Volume datavol
What am I missing?
I have set up a private Bitbucket repository which the Jenkins Git plugin could access just fine. When I switched the repo to require authentication, Jenkins complains:
stderr: fatal: Authentication failed for 'https://bitbucket/scm/test/test.git'
In the plugin I provided credentials that have sufficient permissions to access the repository. I can also test this successfully on the command line of the Jenkins server.
Repository URL in Jenkins: https://bitbucket/scm/test/test.git
Authentication works fine if I add the credentials directly to the repository URL:
https://testuser:pa$$@bitbucket/scm/test/test.git
I currently don't understand how the credentials are passed to git by the Jenkins plugin, so maybe something is wrong with my gitconfig? This is mine:
[credential]
    helper = store
[core]
    editor = nano.exe
    askpass = false
Any suggestions on how to configure git and Jenkins so that I can use user authentication (username and password) over HTTPS?
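One hedged suspicion based on this gitconfig: the Jenkins Git plugin hands username/password to git through a temporary GIT_ASKPASS helper, but git consults configured credential helpers first, so a stale or wrong entry served by credential.helper = store (from ~/.git-credentials of the user Jenkins runs as) would make authentication fail before the plugin's credentials are ever asked for. A sketch for testing that theory on the Jenkins server; the user and paths are assumptions:

# Run as the user the Jenkins agent runs as (often 'jenkins')

# 1) Retry with all credential helpers disabled for this single call;
#    an empty helper value clears the helper list
git -c credential.helper= ls-remote https://bitbucket/scm/test/test.git

# 2) Inspect what the 'store' helper would hand out
cat ~/.git-credentials

# 3) If a stale entry is the culprit, remove the helper or clean the file
git config --global --unset credential.helper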