AskOverflow.Dev

Questions [ovh] (server)

Samuel Hackwill
Asked: 2022-03-08 04:06:05 +0800 CST

Is 250 Mbps on a cheap VPS enough for 500 CCU listening to a radio stream? [duplicate]

  • 0
This question already has answers here:
How do you do load testing and capacity planning for web sites? (5 answers)
Closed 6 months ago.

I want to use a cheap VPS hosted by OVH in France (1 vCore, 2 GB RAM, 40 GB NVMe SSD, 250 Mbps unmetered) to host an Icecast server that will be used for an event this month. There will be up to 500 CCU listening to a 128 kbps audio stream.

Based on my reading of this article, it seems to me that 250 Mbps should be enough to handle the load, but I have no experience managing this kind of question.

My reasoning is 128 kbps × 500 CCU + 10% overhead ≈ 70 Mbps.
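
As a sanity check of that arithmetic, in plain shell using only the numbers above:

# 500 listeners x 128 kbps, plus 10% protocol overhead, in Mbps
echo "scale=1; 500 * 128 * 1.1 / 1000" | bc
# prints 70.4, comfortably below the 250 Mbps port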

I am also wondering whether the 250 Mbps unmetered bandwidth OVH offers is guaranteed, or whether load from other services hosted by other customers on the same machine could affect performance. (I have already asked OVH, but they were not particularly helpful.)

Thanks for your insights! Samuel

Update

I set up a load-testing scenario using the script described in the link above.

#!/bin/sh
#

# max concurrent curls to kick off
max=600
# how long to sleep between each curl, can be decimal  0.5
delay=1
# how long to stay connected (in seconds)
duration=1800
# url to request from
URL=<theURL>

echo "Start load test"

while /bin/true
do
count=0
while [ $count -le $max ]
do  
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   curl -m $duration --silent --output /dev/null "$URL" &
   [ "$delay" != "" ] && sleep $delay
   count=$((count + 10))
   echo "Added 10 clients, now at $count clients"
done
wait
done

Before starting the script on VPS1 (the "client" machine), I opened a window on VPS2 (the "server" machine, where the icecast2 server lives) to monitor network usage on the network interface with slurm, like this:

slurm -i eth0

I also opened a window to monitor Icecast's CPU usage (on VPS2), like this:

top -p <PID OF ICECAST>

and then started the script while listening to the radio stream. Everything went smoothly, I didn't hear any glitches, and the CPU usage (6% at 600 CCU) was very reasonable (network usage was also much lower than I expected, peaking at 17 MB), so I guess my setup passed the load test!

Thanks for your help.

vps ovh icecast
  • 1 answer
  • 75 Views
Patryk Chowratowicz 'Zszywacz'
Asked: 2022-01-27 09:19:54 +0800 CST

How can I redirect domain A to domain B using only DNS records, without hosting?

  • 2

I want to redirect domain A (which has no hosting space) to domain B (with a 301), but when I try https://domainA.com or https://www.domainA.com it fails with ERR_CONNECTION_REFUSED. Is this even possible using only DNS records?

I am using the OVH.com redirect panel, and the http:// redirect works fine.
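
A quick way to see the two behaviours side by side, with domainA.com standing in for the real domain (a DNS record cannot terminate TLS, so HTTPS would need an actual server holding a certificate for the name):

curl -sI http://domainA.com | head -n 1    # expect a 301 from the OVH redirect service
curl -I https://domainA.com                # expect "Connection refused": nothing listens on 443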

hosting https ovh 301-redirect redirection
  • 4 answers
  • 556 Views
Gilberto Martins
Asked: 2022-01-26 11:14:23 +0800 CST

With OVH vRack, 2 PVEs cannot fully communicate

  • 2

In OVH I have 2 ProxMox servers, each containing a firewall and some other hosts. I am trying to use the OVH vRack for private communication between them, but it does not work.

Here is a summary of my network:

[vRack configuration diagram]

The goal is to reach PRD1FRM206 from PRD2FRM201, and vice versa.

Hosts

  • PRD1FRM206 - host on server PVE01
  • PRD1FWL100 - firewall on server PVE01
  • PRD2FRM201 - host on server PVE02
  • PRD2FWL100 - firewall on server PVE02
  • PVE01 and PVE02 - dedicated ProxMox servers, both hosted at OVH, interconnected by the OVH vRack

PVE01 network configuration:

# Server pag-01
# network interfaces
#
# Author:       Gilberto Martins
# Creation:     03/19/2021
# ================================
    auto lo
    iface lo inet loopback

    auto enp5s0f0
    iface enp5s0f0 inet manual
    auto enp5s0f1
    iface enp5s0f1 inet manual

    # Internet Interface
    auto vmbr0
    iface vmbr0 inet dhcp
      # Internet Interface
      bridge-ports enp5s0f0
      bridge-stp off
      bridge-fd 0

    # Tools Network
    auto vmbr1
    iface vmbr1 inet manual
      # Tools network - 172.21.10.0/27
      bridge-ports dummy1
      bridge-stp off
      bridge-fd 0

    # WebPRD Network
    auto vmbr2
    iface vmbr2 inet manual
      # WebPRD network - 172.21.20.0/27
      bridge-ports dummy2
      bridge-stp off
      bridge-fd 0

    # WebHML Network
    auto vmbr3
    iface vmbr3 inet manual
      # WebHML network - 172.21.30.0/27
      bridge-ports dummy3
      bridge-stp off
      bridge-fd 0

    # Interface PrivateNetwork
#    auto vmbr4
#    iface vmbr4 inet static
      # vRack network - DO NOT USE
#      address 192.168.0.10/31
#      bridge-ports enp5s0f1
#      bridge-stp off
#      bridge-fd 0

    # WebSites Network
    auto vmbr5
    iface vmbr5 inet manual
      # WebSites network - 172.21.40.0/27
      bridge-ports dummy4
      bridge-stp off
      bridge-fd 0

PVE01 current interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether KK:KK:KK:KK:KK:KK brd ff:ff:ff:ff:ff:ff
3: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr4 state UP group default qlen 1000
    link/ether YY:YY:YY:YY:YY:YY brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether UU:UU:UU:UU:UU:UU brd ff:ff:ff:ff:ff:ff
    inet 9.9.9.9/24 brd 9.9.9.255 scope global dynamic vmbr0
       valid_lft 56089sec preferred_lft 56089sec
    inet6 zz99::zz22:zzbb:zzhh:zzkk/64 scope link 
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:30:fb:a2:d2:f1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30c0:14ff:fea4:abfd/64 scope link 
       valid_lft forever preferred_lft forever
6: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 96:b3:67:f5:c3:cd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a849:97ff:fe6c:14e9/64 scope link 
       valid_lft forever preferred_lft forever
7: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:99:bd:90:12:24 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e033:5fff:fe6d:222a/64 scope link 
       valid_lft forever preferred_lft forever
8: vmbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether AA:AA:AA:AA:AA:AA brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a242:3fff:fe47:3cfb/64 scope link 
       valid_lft forever preferred_lft forever
9: tap201i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 2a:30:fb:a2:d2:f1 brd ff:ff:ff:ff:ff:ff
10: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 1a:61:72:52:5b:a0 brd ff:ff:ff:ff:ff:ff
11: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 56:16:5b:14:ce:e3 brd ff:ff:ff:ff:ff:ff
12: tap100i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 96:b3:67:f5:c3:cd brd ff:ff:ff:ff:ff:ff
13: tap100i3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether 5e:99:bd:90:12:24 brd ff:ff:ff:ff:ff:ff
14: tap100i4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr4 state UNKNOWN group default qlen 1000
    link/ether ae:84:54:57:7f:46 brd ff:ff:ff:ff:ff:ff
15: tap203i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether aa:dd:66:e9:fd:74 brd ff:ff:ff:ff:ff:ff
17: tap204i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether ce:6b:9e:cb:ca:25 brd ff:ff:ff:ff:ff:ff
18: tap205i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether f2:76:a3:12:48:da brd ff:ff:ff:ff:ff:ff
19: tap206i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether be:92:f0:2e:54:2b brd ff:ff:ff:ff:ff:ff
21: tap402i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 5a:4b:71:1c:b1:6e brd ff:ff:ff:ff:ff:ff
22: tap403i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether ba:0a:25:76:01:6e brd ff:ff:ff:ff:ff:ff
23: tap301i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether 9e:2c:dd:7b:fb:8a brd ff:ff:ff:ff:ff:ff
24: tap302i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether 6e:50:73:30:67:ae brd ff:ff:ff:ff:ff:ff
25: tap303i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether ae:96:60:a4:bc:21 brd ff:ff:ff:ff:ff:ff
26: veth900i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:92:fa:19:f1:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
29: tap304i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether f2:14:af:70:17:42 brd ff:ff:ff:ff:ff:ff
31: tap404i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 8e:3e:76:76:fb:29 brd ff:ff:ff:ff:ff:ff
32: tap401i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether e2:af:68:37:ed:7e brd ff:ff:ff:ff:ff:ff
33: dummy4: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr5 state UNKNOWN group default qlen 1000
    link/ether c2:7e:27:1c:0c:af brd ff:ff:ff:ff:ff:ff
34: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:7e:27:1c:0c:af brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c07e:27ff:fe1c:caf/64 scope link 
       valid_lft forever preferred_lft forever
35: tap100i5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr5 state UNKNOWN group default qlen 1000
    link/ether 92:cb:02:fe:5f:86 brd ff:ff:ff:ff:ff:ff
42: tap501i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr5 state UNKNOWN group default qlen 1000
    link/ether 8a:80:41:55:95:0c brd ff:ff:ff:ff:ff:ff
49: tap202i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether c6:2e:7c:40:b8:02 brd ff:ff:ff:ff:ff:ff

PVE02 network configuration:

# Server pag-02
# network interfaces
#
# Author:       Gilberto Martins
# Creation:     06/08/2021
# ================================

    auto lo
    iface lo inet loopback
    auto eno1
    iface eno1 inet manual
    auto eno2
    iface eno2 inet manual
    
    # Internet Interface 
    auto vmbr0
    iface vmbr0 inet dhcp
      # External interface - DO NOT USE
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0
    
    # Tools Network
    auto vmbr1
    iface vmbr1 inet manual
      # Tools Network - 172.22.10.0/27
      bridge-ports dummy1
      bridge-stp off
      bridge-fd 0
    
    # DataBase Network
    auto vmbr2
    iface vmbr2 inet manual
      # DataBase Network - 172.22.20.0/27
      bridge-ports dummy2
      bridge-stp off
      bridge-fd 0

    # VRack Network
#    auto vmbr3
#    iface vmbr3 inet static
      # VRack Network
#      address 192.168.0.11/31
#      bridge-ports eno2
#      bridge-stp off
#      bridge-fd 0

PVE02 current interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether d0:50:99:fb:24:13 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr3 state UP group default qlen 1000
    link/ether d0:50:99:fb:24:12 brd ff:ff:ff:ff:ff:ff
4: enp0s20f0u8u3c2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 26:fc:24:e9:66:dc brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether RR:RR:RR:RR:RR:RR brd ff:ff:ff:ff:ff:ff
    inet 4.4.4.4/24 brd 4.4.4.255 scope global dynamic vmbr0
       valid_lft 73446sec preferred_lft 73446sec
    inet6 fe80::d250:99ff:fefb:2413/64 scope link 
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:32:c1:5c:c7:77 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ccf5:5bff:fead:bf80/64 scope link 
       valid_lft forever preferred_lft forever
7: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 46:c7:8c:94:01:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::58d2:51ff:fe31:6516/64 scope link 
       valid_lft forever preferred_lft forever
8: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:50:99:fb:24:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d250:99ff:fefb:2412/64 scope link 
       valid_lft forever preferred_lft forever
13: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 9a:de:c5:ba:40:80 brd ff:ff:ff:ff:ff:ff
14: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether ba:32:c1:5c:c7:77 brd ff:ff:ff:ff:ff:ff
15: tap100i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 46:c7:8c:94:01:4b brd ff:ff:ff:ff:ff:ff
16: tap100i3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UNKNOWN group default qlen 1000
    link/ether a2:e9:f1:ba:f1:a9 brd ff:ff:ff:ff:ff:ff
17: tap301i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 66:ba:b1:22:e8:22 brd ff:ff:ff:ff:ff:ff
18: tap302i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether e2:f8:74:ad:e4:77 brd ff:ff:ff:ff:ff:ff
19: tap303i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 3e:b1:f0:42:8d:75 brd ff:ff:ff:ff:ff:ff
20: tap304i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 52:7a:ec:b5:46:4b brd ff:ff:ff:ff:ff:ff
21: veth201i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr201i0 state UP group default qlen 1000
    link/ether fe:0c:f2:09:62:fe brd ff:ff:ff:ff:ff:ff link-netnsid 0
22: fwbr201i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:8d:06:38:c5 brd ff:ff:ff:ff:ff:ff
23: fwpr201p0@fwln201i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether 52:58:a1:6d:db:00 brd ff:ff:ff:ff:ff:ff
24: fwln201i0@fwpr201p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr201i0 state UP group default qlen 1000
    link/ether ae:fd:8d:06:38:c5 brd ff:ff:ff:ff:ff:ff

PRD1FWL100 network configuration:

# This is the network config written by 'subiquity'
#
# Author:       Gilberto Martins
# Modified:     03/19/2021
# ===============================

network:
  ethernets:
    # External IP
    ens18:
      # IP and Gateway have been intentionally changed
      addresses:
      - 1.1.1.1/32
      gateway4: 1.1.1.254
      # OVH mandatory routes
      routes:
      - to: 1.1.1.154/32
        via: 1.1.1.1
      - to: 0.0.0.0/0
        via: 1.1.1.1
      nameservers:
        addresses:
          - 172.21.10.2
        search:
          - kprd1
    # Tools Network
    ens19:
      addresses:
      - 172.21.10.1/27
    # WebPrd Network
    ens20:
      addresses:
      - 172.21.20.1/27
    # WebHml Network
    ens21:
      addresses:
      - 172.21.30.1/27
    # Vrack Network (RFC 3021)
    ens22:
      addresses:
      - 172.30.0.0/31
      routes:
        # Tools network at kprd2
      - to: 172.22.10.0/27
        via: 172.30.0.0
        # Database network at kprd2
      - to: 172.22.20.0/27
        via: 172.30.0.0
        # VRack <-> VRack 
      - to: 172.30.0.1
        via: 172.30.0.0
    # WebServer Network
    ens23:
      addresses:
      - 172.21.50.1/27
  version: 2

PRD1FWL100 current interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether XS:XS:XS:XS:XS:XS brd ff:ff:ff:ff:ff:ff
    inet 9.9.9.9/32 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::ff:fe41:b0ec/64 scope link 
       valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 22:a9:69:cd:9a:08 brd ff:ff:ff:ff:ff:ff
    inet 172.21.10.1/27 brd 172.21.10.31 scope global ens19
       valid_lft forever preferred_lft forever
    inet6 fe80::20a9:69ff:fecd:9a08/64 scope link 
       valid_lft forever preferred_lft forever
4: ens20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 96:c5:9a:8e:13:0d brd ff:ff:ff:ff:ff:ff
    inet 172.21.20.1/27 brd 172.21.20.31 scope global ens20
       valid_lft forever preferred_lft forever
    inet6 fe80::94c5:9aff:fe8e:130d/64 scope link 
       valid_lft forever preferred_lft forever
5: ens21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 36:b2:5a:cc:a4:91 brd ff:ff:ff:ff:ff:ff
    inet 172.21.30.1/27 brd 172.21.30.31 scope global ens21
       valid_lft forever preferred_lft forever
    inet6 fe80::34b2:5aff:fecc:a491/64 scope link 
       valid_lft forever preferred_lft forever
6: ens22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 92:5b:ab:3c:75:2f brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.0/31 scope global ens22
       valid_lft forever preferred_lft forever
    inet6 fe80::905b:abff:fe3c:752f/64 scope link 
       valid_lft forever preferred_lft forever
7: ens23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 9a:a2:c1:97:59:54 brd ff:ff:ff:ff:ff:ff
    inet 172.21.50.1/27 brd 172.21.50.31 scope global ens23
       valid_lft forever preferred_lft forever
    inet6 fe80::98a2:c1ff:fe97:5954/64 scope link 
       valid_lft forever preferred_lft forever
8: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.10.1.1/29 brd 10.10.1.7 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::ece8:6abc:f8bd:d5f4/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

PRD1FWL100 current routing table

Note: external addresses have been masked

user@prd1fwl100:~$ ip route 
default via 9.9.9.9 dev ens18 proto static 
10.10.1.0/29 dev tun0 proto kernel scope link src 10.10.1.1 
9.9.9.9 via 8.8.8.8 dev ens18 proto static 
172.21.10.0/27 dev ens19 proto kernel scope link src 172.21.10.1 
172.21.20.0/27 dev ens20 proto kernel scope link src 172.21.20.1 
172.21.30.0/27 dev ens21 proto kernel scope link src 172.21.30.1 
172.21.50.0/27 dev ens23 proto kernel scope link src 172.21.50.1 
172.22.10.0/27 via 172.30.0.0 dev ens22 proto static 
172.22.20.0/27 via 172.30.0.0 dev ens22 proto static 
172.30.0.1 via 172.30.0.0 dev ens22 proto static 

user@prd1fwl100:~$ ip route show table local
broadcast 10.10.1.0 dev tun0 proto kernel scope link src 10.10.1.1 
local 10.10.1.1 dev tun0 proto kernel scope host src 10.10.1.1 
broadcast 10.10.1.7 dev tun0 proto kernel scope link src 10.10.1.1 
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 
local 9.9.9.9 dev ens18 proto kernel scope host src 9.9.9.9
broadcast 172.21.10.0 dev ens19 proto kernel scope link src 172.21.10.1 
local 172.21.10.1 dev ens19 proto kernel scope host src 172.21.10.1 
broadcast 172.21.10.31 dev ens19 proto kernel scope link src 172.21.10.1 
broadcast 172.21.20.0 dev ens20 proto kernel scope link src 172.21.20.1 
local 172.21.20.1 dev ens20 proto kernel scope host src 172.21.20.1 
broadcast 172.21.20.31 dev ens20 proto kernel scope link src 172.21.20.1 
broadcast 172.21.30.0 dev ens21 proto kernel scope link src 172.21.30.1 
local 172.21.30.1 dev ens21 proto kernel scope host src 172.21.30.1 
broadcast 172.21.30.31 dev ens21 proto kernel scope link src 172.21.30.1 
broadcast 172.21.50.0 dev ens23 proto kernel scope link src 172.21.50.1 
local 172.21.50.1 dev ens23 proto kernel scope host src 172.21.50.1 
broadcast 172.21.50.31 dev ens23 proto kernel scope link src 172.21.50.1 
local 172.30.0.0 dev ens22 proto kernel scope host src 172.30.0.0 

PRD2FWL100 network configuration:

# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        # Internet interface
        eth0:
            # Sensitive addressing information have been intentionally changed
            addresses:
            - 3.3.3.3/32
            gateway4: 3.3.3.254
            match:
              macaddress: XX:XX:XX:XX:XX:XX
            # OVH mandatory routes
            routes:
            - to: 3.3.3.3/32
              via: 3.3.3.8
            - to: 0.0.0.0/0
              via: 3.3.3.8
            nameservers:
              addresses:
                - 172.22.10.2
              search:
                - kprd2
            set-name: eth0
        # Tools interface
        eth1:
            addresses:
            - 172.22.10.1/27
            match:
                macaddress: 6a:6d:d1:0a:de:10
            nameservers:
                addresses:
                - 172.22.10.2
                search:
                - kprd2
            set-name: eth1
        # Database interface
        eth2:
            addresses:
            - 172.22.20.1/27
            match:
                macaddress: aa:89:70:41:ed:22
            set-name: eth2
        # VRack Network
        eth3:
            addresses:
            - 172.30.0.1/31
            match:
                macaddress: ZZ:ZZ:ZZ:ZZ:ZZ:ZZ
            routes:
              # Tools network at kprd1
            - to: 172.21.10.0/27
              via: 172.30.0.1
              # WebPrd network at kprd1
            - to: 172.21.20.0/27
              via: 172.30.0.1
              # WebHml network at kprd1
            - to: 172.21.30.0/27
              via: 172.30.0.1
              # WebServer network at kprd1
            - to: 172.21.50.0/27
              via: 172.30.0.1
              # VRack <-> VRack 
            - to: 172.30.0.0
              via: 172.30.0.1
            set-name: eth3

PRD2FWL100 current interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether FE:FE:FE:FE:FE brd ff:ff:ff:ff:ff:ff
    inet 7.7.7.7/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ff:fe92:ec0/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 6a:6d:d1:0a:de:10 brd ff:ff:ff:ff:ff:ff
    inet 172.22.10.1/27 brd 172.22.10.31 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::686d:d1ff:fe0a:de10/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether aa:89:70:41:ed:22 brd ff:ff:ff:ff:ff:ff
    inet 172.22.20.1/27 brd 172.22.20.31 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a889:70ff:fe41:ed22/64 scope link 
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d6:9f:c5:e4:93:9d brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.1/31 scope global eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::d49f:c5ff:fee4:939d/64 scope link 
       valid_lft forever preferred_lft forever
6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
    link/none 
    inet 10.10.2.1/29 brd 10.10.2.7 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::d63:c98b:2e1:ad3d/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

PRD2FWL100 routing table

Note: external addresses have been masked

user@prd2fwl100:~$ ip route
default via 144.217.125.8 dev eth0 proto static 
10.10.2.0/29 dev tun0 proto kernel scope link src 10.10.2.1 
9.9.9.9 via 8.8.8.8 dev eth0 proto static 
172.21.10.0/27 via 172.30.0.1 dev eth3 proto static 
172.21.20.0/27 via 172.30.0.1 dev eth3 proto static 
172.21.30.0/27 via 172.30.0.1 dev eth3 proto static 
172.21.50.0/27 via 172.30.0.1 dev eth3 proto static 
172.22.10.0/27 dev eth1 proto kernel scope link src 172.22.10.1 
172.22.20.0/27 dev eth2 proto kernel scope link src 172.22.20.1 
172.30.0.0 via 172.30.0.1 dev eth3 proto static 

user@prd2fwl100:~$ ip route show table local
broadcast 10.10.2.0 dev tun0 proto kernel scope link src 10.10.2.1 
local 10.10.2.1 dev tun0 proto kernel scope host src 10.10.2.1 
broadcast 10.10.2.7 dev tun0 proto kernel scope link src 10.10.2.1 
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 
local 8.8.8.8 dev eth0 proto kernel scope host src 8.8.8.8 
broadcast 172.22.10.0 dev eth1 proto kernel scope link src 172.22.10.1 
local 172.22.10.1 dev eth1 proto kernel scope host src 172.22.10.1 
broadcast 172.22.10.31 dev eth1 proto kernel scope link src 172.22.10.1 
broadcast 172.22.20.0 dev eth2 proto kernel scope link src 172.22.20.1 
local 172.22.20.1 dev eth2 proto kernel scope host src 172.22.20.1 
broadcast 172.22.20.31 dev eth2 proto kernel scope link src 172.22.20.1 
local 172.30.0.1 dev eth3 proto kernel scope host src 172.30.0.1 

PRD1FRM206 network configuration:

# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 172.21.10.7/27
            gateway4: 172.21.10.1
            match:
                macaddress: ca:7a:03:34:a0:43
            nameservers:
                addresses:
                - 172.21.10.2
                search:
                - kprd1
            set-name: eth0

PRD2FRM201 network configuration:

PRD2FRM201 is an LXC host with the following configuration in ProxMox:

  • IP 172.22.10.2/27
  • Gateway 172.22.10.1
  • Bridge vmbr1

Communication tests:

From PRD2FWL100, I can ping all the hops before PRD1FRM206:

user@prd2fwl100:~$ ping 172.30.0.0 -c1
PING 172.30.0.0 (172.30.0.0) 56(84) bytes of data.
64 bytes from 172.30.0.0: icmp_seq=1 ttl=64 time=0.671 ms

--- 172.30.0.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms

user@prd2fwl100:~$ ping 172.21.10.1 -c1
PING 172.21.10.1 (172.21.10.1) 56(84) bytes of data.
64 bytes from 172.21.10.1: icmp_seq=1 ttl=64 time=0.822 ms

--- 172.21.10.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.822/0.822/0.822/0.000 ms

But I cannot ping or arping PRD1FRM206:

user@prd2fwl100:~$ ping 172.21.10.7 -c1
PING 172.21.10.7 (172.21.10.7) 56(84) bytes of data.
From 172.30.0.1 icmp_seq=1 Destination Host Unreachable

--- 172.21.10.7 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

user@prd2fwl100:~$ arping 172.21.10.7 -c1
ARPING 172.21.10.7 from 172.30.0.1 eth3
Sent 1 probes (1 broadcast(s))
Received 0 response(s)

Next, I try to ping all the IPs on the way from PRD2FRM201 to PRD1FRM206:

user@PRD2FRM201:~$ sudo ping 172.22.10.1 -c1
PING 172.22.10.1 (172.22.10.1) 56(84) bytes of data.
64 bytes from 172.22.10.1: icmp_seq=1 ttl=64 time=0.134 ms

--- 172.22.10.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms

user@PRD2FRM201:~$ sudo ping 172.30.0.1 -c1
PING 172.30.0.1 (172.30.0.1) 56(84) bytes of data.
64 bytes from 172.30.0.1: icmp_seq=1 ttl=64 time=0.159 ms

--- 172.30.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms

Again, there is a point past which I cannot get any further:

user@PRD2FRM201:~$ sudo ping 172.30.0.0 -c1
PING 172.30.0.0 (172.30.0.0) 56(84) bytes of data.

--- 172.30.0.0 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

user@PRD2FRM201:~$ sudo arping 172.30.0.0 -c1
ARPING 172.30.0.0 from 172.22.10.2 eth0
Sent 1 probes (1 broadcast(s))
Received 0 response(s)

What do I have to do to solve this?
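
One diagnostic that could narrow down where the packets die (assuming enp5s0f1 is the vRack-facing NIC on PVE01, as in the configuration above) is to watch that NIC while running the arping from PRD2FWL100:

# on PVE01: print link-level headers for ARP/ICMP arriving on the vRack uplink
tcpdump -eni enp5s0f1 arp or icmp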

routing networking arp proxmox ovh
  • 1 answer
  • 211 Views
Lenne
Asked: 2021-12-15 06:53:10 +0800 CST

Can Windows Server be upgraded from a mounted ISO?

  • 0

I have a Windows 2012 server on a bare-metal dedicated server hosted in a datacenter. (OVH)

It seems I can mount an ISO so that it appears as a CD with a drive letter. But if I mount Server 2016 or 2019 installation media and start an upgrade from it, do I risk the server rebooting during the upgrade, at which point the media would no longer be available, leaving the upgrade in an indeterminate state between versions?

Or should I mount the image from my PC through the Java KVM? Even though I have 1G fibre, the network traffic only shows about 3 Mbps, so it takes ages just to get from the boot screen to the first options screen.

Another option, if possible, would be to copy the contents of the ISO into a folder and run the installation from there. Would that work?

upgrade windows-server-2012-r2 ovh
  • 1 answer
  • 270 Views
Gilberto Martins
Asked: 2021-11-19 08:23:45 +0800 CST

In OVH, how do I use the vRack to connect 2 VMs, each in a ProxMox server?

  • 0

As shown in this diagram, each of the two PVEs has 1 VM acting as a firewall plus several other VMs, organized into subnets addressed per RFC 1918.

For better understanding, here is the network addressing:

PVE01 - Net 01 - 172.1.10.0/27
PVE01 - Net 02 - 172.1.20.0/27
PVE01 - Net 03 - 172.1.30.0/27

PVE02 - Net 01 - 172.2.10.0/27
PVE02 - Net 02 - 172.2.20.0/27
PVE02 - Net 03 - 172.2.30.0/27

Currently, any server in the structure can communicate with any other server inside the same PVE. The goal is for any VM on server A to communicate with any VM on server B, and vice versa. Both PVEs are already connected to the same vRack in the OVH Web Manager (which is as far as I could get following the OVH documentation).

I would like the two firewalls to communicate over the vRack. Has anyone done a configuration like this? If so, is there any documentation that would help me understand how to configure these two interfaces?
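
For reference, the kind of stanza I imagine in /etc/network/interfaces on each PVE (a sketch only, assuming enp5s0f1 is the NIC patched into the vRack; it mirrors the commented-out vmbr4 block shown above):

# bridge carrying vRack traffic; no IP on the host side, the firewall VM
# gets a vNIC on this bridge and carries the private address itself
auto vmbr4
iface vmbr4 inet manual
        bridge-ports enp5s0f1
        bridge-stp off
        bridge-fd 0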

iptables proxmox ovh
  • 1 answer
  • 146 Views
Stargateur
Asked: 2021-07-13 20:42:50 +0800 CST

systemd-networkd IPv6 gateway not set

  • 3

Cross-posted on github.

I don't understand why my systemd network configuration does not set the IPv6 gateway route. XXX.XXX.XXX.XXX and YYYY:YYYY:YYYY:YYYY:: always stand for the same IPs. My server is hosted by OVH:

[Match]
Name=eth0
 
[Network]
DHCP=false
 
DNS=91.121.161.184
DNS=91.121.164.227
 
Address=XXX.XXX.XXX.XXX/24
Gateway=XXX.XXX.XXX.254
 
DNS=2001:41d0:1:e2b8::1
DNS=2001:41d0:1:e5e3::1
 
Address=YYYY:YYYY:YYYY:YYYY::/64
Gateway=YYYY:YYYY:YYYY:YYFF:FF:FF:FF:FF

This works for IPv4:

networkctl status eth0
● 2: eth0
                     Link File: n/a
                  Network File: /etc/systemd/network/eth0.network
                          Type: ether
                         State: routable (configuring)
                  Online state: online
                        Vendor: Intel Corporation
                         Model: Ethernet Controller 10G X550T
                    HW Address: xx:xx:xx:xx:xx:xx (ASRock Incorporation)
                           MTU: 1500 (min: 68, max: 9710)
                         QDisc: mq
  IPv6 Address Generation Mode: eui64
          Queue Length (Tx/Rx): 64/64
              Auto negotiation: yes
                         Speed: 10Gbps
                        Duplex: full
                          Port: tp
                       Address: XXX.XXX.XXX.XXX
                                YYYY:YYYY:YYYY:YYYY::
                                fe80::d250:99ff:fed9:a09d
                       Gateway: XXX.XXX.XXX.254
                           DNS: 91.121.161.184
                                91.121.164.227
                                2001:41d0:1:e2b8::1
                                2001:41d0:1:e5e3::1
             Activation Policy: up
           Required For Online: yes
             DHCP6 Client DUID: DUID-EN/Vendor:0000000000000000000000000000
 
Jul 13 23:21:15 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 4s
Jul 13 23:21:19 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 8s
Jul 13 23:21:25 optomata systemd-networkd[557]: eth0: NDISC: No RA received before link confirmation timeout
Jul 13 23:21:25 optomata systemd-networkd[557]: eth0: NDISC: Invoking callback for 'timeout' event.
Jul 13 23:21:25 optomata systemd-networkd[557]: eth0: NDisc handler get timeout event
Jul 13 23:21:25 optomata systemd-networkd[557]: eth0: link_check_ready(): static routes are not configured.
Jul 13 23:21:27 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 17s
Jul 13 23:21:45 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 33s
Jul 13 23:22:19 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 1min 10s
Jul 13 23:23:29 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 2min 21s
ping google.fr
PING google.fr (142.250.201.195) 56(84) bytes of data.
64 bytes from bud02s35-in-f3.1e100.net (142.250.201.195): icmp_seq=1 ttl=111 time=15.5 ms
64 bytes from bud02s35-in-f3.1e100.net (142.250.201.195): icmp_seq=2 ttl=111 time=15.5 ms
64 bytes from bud02s35-in-f3.1e100.net (142.250.201.195): icmp_seq=3 ttl=111 time=15.5 ms
64 bytes from bud02s35-in-f3.1e100.net (142.250.201.195): icmp_seq=4 ttl=111 time=15.5 ms
64 bytes from bud02s35-in-f3.1e100.net (142.250.201.195): icmp_seq=5 ttl=111 time=15.5 ms
^C
--- google.fr ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 15.461/15.471/15.488/0.009 ms

But IPv6 does not work:

ping -6 google.fr
ping: connect: Network is unreachable
ip -6 route show dev eth0
YYYY:YYYY:YYYY:YYYY::/64 proto kernel metric 256 pref medium
fe80::/64 proto kernel metric 256 pref medium

But if I add the route manually:

ip -6 route add YYYY:YYYY:YYYY:YYFF:FF:FF:FF:FF dev eth0
networkctl status eth0
● 2: eth0
                     Link File: n/a
                  Network File: /etc/systemd/network/eth0.network
                          Type: ether
                         State: routable (configured)
                  Online state: online
                        Vendor: Intel Corporation
                         Model: Ethernet Controller 10G X550T
                    HW Address: xx:xx:xx:xx:xx:xx (ASRock Incorporation)
                           MTU: 1500 (min: 68, max: 9710)
                         QDisc: mq
  IPv6 Address Generation Mode: eui64
          Queue Length (Tx/Rx): 64/64
              Auto negotiation: yes
                         Speed: 10Gbps
                        Duplex: full
                          Port: tp
                       Address: XXX.XXX.XXX.XXX
                                YYYY:YYYY:YYYY:YYYY::
                                fe80::d250:99ff:fed9:a09d
                       Gateway: XXX.XXX.XXX.254
                                YYYY:YYYY:YYYY:YYFF:FF:FF:FF:FF
                           DNS: 91.121.161.184
                                91.121.164.227
                                2001:41d0:1:e2b8::1
                                2001:41d0:1:e5e3::1
             Activation Policy: up
           Required For Online: yes
             DHCP6 Client DUID: DUID-EN/Vendor:0000000000000000000000000000

Jul 13 23:23:29 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 2min 21s
Jul 13 23:25:51 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 4min 38s
Jul 13 23:30:30 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 9min 18s
Jul 13 23:39:49 optomata systemd-networkd[557]: eth0: NDISC: Sent Router Solicitation, next solicitation in 18min 42s
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: Remembering foreign route: dst: YYYY:YYYY:YYYY:YYff:ff:ff:ff:ff/128, src: n/a, gw: n/a, prefsrc: n/a, scope: global, table: main(254), proto: boot, type: unicast, nexthop: 0, priority: 1024
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: Configuring route: dst: n/a, src: n/a, gw: YYYY:YYYY:YYYY:YYff:ff:ff:ff:ff, prefsrc: n/a, scope: global, table: main(254), proto: static, type: unicast, nexthop: 0, priority: 1024
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: Received remembered route: dst: n/a, src: n/a, gw: YYYY:YYYY:YYYY:YYff:ff:ff:ff:ff, prefsrc: n/a, scope: global, table: main(254), proto: static, type: unicast, nexthop: 0, priority: 1024
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: Routes set
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: link_check_ready(): dhcp4:no ipv4ll:no dhcp6_addresses:no dhcp6_routes:no dhcp6_pd_addresses:no dhcp6_pd_routes:no ndisc_addresses:yes ndisc_routes:yes
Jul 13 23:47:49 optomata systemd-networkd[557]: eth0: State changed: configuring -> configured
ping -6 google.com
PING google.com(fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e)) 56 data bytes
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=1 ttl=113 time=1.39 ms
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=2 ttl=113 time=1.41 ms
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=3 ttl=113 time=1.39 ms
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=4 ttl=113 time=1.40 ms
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=5 ttl=113 time=1.42 ms
64 bytes from fra24s08-in-x0e.1e100.net (2a00:1450:4001:82b::200e): icmp_seq=6 ttl=113 time=1.40 ms
ip -6 route show dev eth0
YYYY:YYYY:YYYY:YYYY::/64 proto kernel metric 256 pref medium
YYYY:YYYY:YYYY:YYff:ff:ff:ff:ff metric 1024 pref medium
fe80::/64 proto kernel metric 256 pref medium
default via YYYY:YYYY:YYYY:YYff:ff:ff:ff:ff proto static metric 1024 pref medium
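
If the cause is that the gateway YYYY:YYYY:YYYY:YYFF:FF:FF:FF:FF lies outside the configured YYYY:YYYY:YYYY:YYYY::/64 prefix (an assumption on my part; OVH IPv6 gateways often do), then the persistent equivalent of the manual route above may be a [Route] section in the .network file that declares the gateway on-link. A sketch, untested:

[Route]
# the gateway is outside the /64, so tell networkd it is reachable on-link
Gateway=YYYY:YYYY:YYYY:YYFF:FF:FF:FF:FF
GatewayOnLink=yes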

System information:

cat /etc/os-release
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
LOGO=archlinux
uname -a
Linux optomata 5.12.15-arch1-1 #1 SMP PREEMPT Wed, 07 Jul 2021 23:35:29 +0000 x86_64 GNU/Linux
systemctl --version
systemd 249 (249-2-arch)
+PAM +AUDIT -SELINUX -APPARMOR -IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified

journalctl -u systemd-networkd.service

networking ipv6 systemd ovh
  • 1 answer
  • 679 Views
Sean Saleh
Asked: 2021-05-29 10:14:19 +0800 CST

Automatically provisioning Windows Server at OVH

  • 2

When I set up a Windows server as a dedicated server at OVH, I would like to automatically run a script that configures the Windows server.

I see they have an Installation script (URL) option, but how do I use it with Windows?

[Installation script option in the OVH console]

windows provisioning ovh
  • 1 answer
  • 143 Views
David Le Borgne
Asked: 2021-05-13 02:47:35 +0800 CST

Network startup fails on OVH Proxmox 6.4 when a bridge has no interfaces

  • 1

We are setting up a new Proxmox 6.4 host (based on Debian 10.9) on OVH (Advance-2) hardware, using the template provided by OVH.

To create a "virtual" bridge for VMs and LXC containers, we added these lines to /etc/network/interfaces:

auto vmbr1
iface vmbr1 inet static
        address 10.0.1.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

This configuration works fine on all our other Proxmox hosts, but fails on the new machine: rebooting spends 20 minutes at "Raise network interfaces", which fails with "Failed to raise network interfaces", and the bridge interface does not come up.

After rebooting without vmbr1, I can see that systemctl restart networking hangs on "Starting to wait for vmbr1 link to be up":

May 12 10:01:49 pve7 ifup[7300]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
May 12 10:01:49 pve7 ifup[7300]: Disabling IPv6 autoconfiguration for vmbr1
May 12 10:01:49 pve7 ifup[7300]: net.ipv6.conf.vmbr1.accept_ra = 0
May 12 10:01:49 pve7 ifup[7300]: net.ipv6.conf.vmbr1.accept_dad = 0
May 12 10:01:49 pve7 ifup[7300]: net.ipv6.conf.vmbr1.autoconf = 0
May 12 10:01:49 pve7 ifup[7300]: Starting to wait for vmbr1 link to be up at Wed May 12 10:01:49 UTC 2021
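
To find which hook script prints that message (a diagnostic sketch, nothing more), the ifupdown configuration and hook directories can be searched for it:

# locate the script that emits "Starting to wait for ... link to be up"
grep -r "Starting to wait" /etc/network/ 2>/dev/null
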
debian bridge proxmox ovh
  • 1 answer
  • 1115 Views
lucasart
Asked: 2020-08-22 21:19:06 +0800 CST

Setting default valid_lft and preferred_lft values with Netplan on Ubuntu 20.04

  • 1

What impact, if any, do valid_lft and preferred_lft values greater than zero (rather than forever) have? Should I be worried about this, and if so, how can I automatically set them to forever at boot (preferably with Netplan)?

root:~# ip a
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether aa:00:11:22:33:44 brd ff:ff:ff:ff:ff:ff
    inet 111.111.111.111/32 scope global ens3
       valid_lft 86154sec preferred_lft 86154sec
    inet 222.222.222.222/32 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa:bbb:ccc:ddd/64 scope link 
       valid_lft forever preferred_lft forever

root:~# ip addr change 111.111.111.111 dev ens3 valid_lft forever preferred_lft forever

root:~# ip a
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether aa:00:11:22:33:44 brd ff:ff:ff:ff:ff:ff
    inet 111.111.111.111/32 scope global ens3
       valid_lft forever preferred_lft forever
    inet 222.222.222.222/32 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa:bbb:ccc:ddd/64 scope link 
       valid_lft forever preferred_lft forever

I am asking because I noticed that the server's default IP address switched from 111.111.111.111 to 222.222.222.222 without any manual interaction, i.e. ifconfig -a shows for ens3:

root:~# ifconfig -a
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 222.222.222.222  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::aaa:bbb:ccc:ddd  prefixlen 64  scopeid 0x20<link>
        ether aa:00:11:22:33:44  txqueuelen 1000  (Ethernet)
        RX packets 206473  bytes 54232020 (54.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111121  bytes 19855468 (19.8 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
root:~# cat /etc/netplan/*.yaml 
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: yes
            match:
                macaddress: aa:00:11:22:33:44
            mtu: 1500
            set-name: ens3
            addresses:
               - 111.111.111.111/32
               - 222.222.222.222/32
            nameservers:
                addresses:
                    - 8.8.8.8
                    - 4.4.4.4
                    - 1.1.1.1
                    - 1.0.0.1

Could valid_lft and preferred_lft be the cause of the switch?

If not, how can I make sure the primary IP address stays 111.111.111.111 with this configuration? I am using Virtualmin, which occasionally flashes a message saying the primary IP address has changed to 222.222.222.222 and offers to update it from 111.111.111.111 to 222.222.222.222. At that point, ifconfig shows 222.222.222.222 as above.
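
One guess, and only a guess: the finite valid_lft on 111.111.111.111 suggests that address is also being handed out by the DHCP lease, and each renewal can reorder the addresses, changing which one tools report as primary. A sketch that avoids DHCP entirely, assuming the addresses really are static; the gateway 203.0.113.254 is hypothetical, since the real one does not appear above:

network:
    version: 2
    ethernets:
        ens3:
            dhcp4: no
            match:
                macaddress: aa:00:11:22:33:44
            mtu: 1500
            set-name: ens3
            addresses:
               - 111.111.111.111/32
               - 222.222.222.222/32
            routes:
                # hypothetical gateway; OVH gateways typically sit off-subnet, hence on-link
                - to: 0.0.0.0/0
                  via: 203.0.113.254
                  on-link: true
            nameservers:
                addresses:
                    - 8.8.8.8
                    - 1.1.1.1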

ubuntu networking netplan ovh
  • 1 answer
  • 3331 Views
Balonowy
Asked: 2020-08-18 07:21:00 +0800 CST

How do I set up the OVH firewall for an OpenVPN server?

  • 0

I got my OpenVPN server running using the following script: https://github.com/angristan/openvpn-install

I can connect to the VPN network, ping local and external IP addresses, and reach HTTP servers (using both local and external IPs).

DNS does not work on the clients: when I try to ping google.com or any other domain, it reports an IP resolution error. When I try nslookup on any domain, it retries a few times and returns a DNS timeout.

  • My external IP: 147.135.XXX.XXX
  • My VPN network: 10.8.0.0/24
  • My internal IP: 10.8.0.1

I have tried:

  • default and non-default VPN server ports
  • TCP and UDP
  • Adguard, Google, and a locally hosted DNS server (on the VPN)
  • opening port 53 UDP on the VPN server

None of it has worked so far.

Then I disabled the OVH firewall. After that, DNS started working on the VPN clients.
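
A quick check that points at UDP handling specifically (a diagnostic sketch, assuming the locally hosted resolver at 10.8.0.1): compare a UDP and a TCP query from a connected client while the firewall is enabled:

dig @10.8.0.1 google.com +time=2 +tries=1        # plain UDP query: times out if the firewall drops UDP
dig @10.8.0.1 google.com +tcp +time=2 +tries=1   # same query over TCP: succeeds if only UDP is blocked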

So, how should I configure the OVH firewall? I don't want to disable it entirely, because I host many other things on that server.

I know the rules are applied from lowest to highest priority, so if rule 0 matches, rules 1-19 are not evaluated.

My current configuration: click here for a screenshot

The hidden ports are configured exactly like 80 and 443: established TCP connections are accepted, connections on the specific ports are accepted, and TCP/UDP on 1194 is accepted as well.

Thanks for any help. Also, please comment if I missed something.

firewall routing networking openvpn ovh
  • 1 answer
  • 2486 Views
