AskOverflow.Dev

ZedTuX's questions

ZedTuX
Asked: 2019-12-06 03:28:28 +0800 CST

Keepalived in a Tinc VPN mesh: cannot ping the VIP after an election

  • 0

Description

Configuration

I have 3 nodes connected together with Tinc VPN, on which I want to install HAProxy together with a VIP, so that HAProxy itself is highly available.

Here are the node details:

  • Node 1 has IP address 10.0.0.222/32 on interface vpn
  • Node 2 has IP address 10.0.0.13/32 on interface vpn
  • Node 3 has IP address 10.0.0.103/32 on interface vpn

To that end, I installed keepalived on each machine.

I also enabled the following sysctls:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
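
Whether these are actually in effect can be verified by reading procfs directly; a quick sanity check (my addition, not from the original question):

```shell
# Each of these sysctls is backed by a file under /proc/sys;
# the file contains 1 when the toggle is enabled.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```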

Node 1 has the following /etc/keepalived/keepalived.conf file:

global_defs {
  enable_script_security
  router_id node-1
}

vrrp_script haproxy-check {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state MASTER
    priority 150
    interface vpn
    virtual_router_id 1
    advert_int 1

    virtual_ipaddress {
        10.0.0.1/32
    }

    track_script {
        haproxy-check
    }
}

Nodes 2 and 3 have the following /etc/keepalived/keepalived.conf file:

global_defs {
  enable_script_security
  router_id node-2 # Node 3 has "node-3" here.
}

vrrp_script haproxy-check {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface vpn
    virtual_router_id 1
    advert_int 1

    virtual_ipaddress {
        10.0.0.1/32
    }

    track_script {
        haproxy-check
    }
}

When all nodes are running keepalived, node 1 is the master, the VIP 10.0.0.1 is configured properly, and the other 2 nodes can ping it.

Node 1 logs

Logs when keepalived starts:

Dec  5 14:07:53 node-1 systemd[1]: Starting Keepalive Daemon (LVS and VRRP)...
Dec  5 14:07:53 node-1 Keepalived[5870]: Starting Keepalived v1.3.2 (12/03,2016)
Dec  5 14:07:53 node-1 systemd[1]: Started Keepalive Daemon (LVS and VRRP).
Dec  5 14:07:53 node-1 Keepalived[5870]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Dec  5 14:07:53 node-1 Keepalived[5870]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:07:53 node-1 Keepalived[5871]: Starting Healthcheck child process, pid=5872
Dec  5 14:07:53 node-1 Keepalived_healthcheckers[5872]: Initializing ipvs
Dec  5 14:07:53 node-1 Keepalived_healthcheckers[5872]: Registering Kernel netlink reflector
Dec  5 14:07:53 node-1 Keepalived_healthcheckers[5872]: Registering Kernel netlink command channel
Dec  5 14:07:53 node-1 Keepalived_healthcheckers[5872]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:07:53 node-1 Keepalived[5871]: Starting VRRP child process, pid=5873
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: Registering Kernel netlink reflector
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: Registering Kernel netlink command channel
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: Registering gratuitous ARP shared channel
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:07:53 node-1 Keepalived_healthcheckers[5872]: Using LinkWatch kernel netlink reflector...
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: Using LinkWatch kernel netlink reflector...
Dec  5 14:07:53 node-1 Keepalived_vrrp[5873]: VRRP_Script(haproxy-check) succeeded
Dec  5 14:07:54 node-1 Keepalived_vrrp[5873]: VRRP_Instance(haproxy-vip) Transition to MASTER STATE
Dec  5 14:07:54 node-1 Keepalived_vrrp[5873]: VRRP_Instance(haproxy-vip) Changing effective priority from 150 to 152
Dec  5 14:07:55 node-1 Keepalived_vrrp[5873]: VRRP_Instance(haproxy-vip) Entering MASTER STATE
Dec  5 14:07:57 node-1 ntpd[946]: Listen normally on 45 vpn 10.0.0.1:123

Node 1 ip addr:

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.222/24 scope global vpn
   valid_lft forever preferred_lft forever
inet 10.0.0.1/24 scope global secondary vpn
   valid_lft forever preferred_lft forever

Nodes 2 and 3 logs

Dec  5 14:14:32 node-2 systemd[1]: Starting Keepalive Daemon (LVS and VRRP)...
Dec  5 14:14:32 node-2 Keepalived[13745]: Starting Keepalived v1.3.2 (12/03,2016)
Dec  5 14:14:32 node-2 Keepalived[13745]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Dec  5 14:14:32 node-2 Keepalived[13745]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:14:32 node-2 Keepalived[13746]: Starting Healthcheck child process, pid=13747
Dec  5 14:14:32 node-2 Keepalived_healthcheckers[13747]: Initializing ipvs
Dec  5 14:14:32 node-2 systemd[1]: Started Keepalive Daemon (LVS and VRRP).
Dec  5 14:14:32 node-2 Keepalived_healthcheckers[13747]: Registering Kernel netlink reflector
Dec  5 14:14:32 node-2 Keepalived_healthcheckers[13747]: Registering Kernel netlink command channel
Dec  5 14:14:32 node-2 Keepalived[13746]: Starting VRRP child process, pid=13748
Dec  5 14:14:32 node-2 Keepalived_healthcheckers[13747]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: Registering Kernel netlink reflector
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: Registering Kernel netlink command channel
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: Registering gratuitous ARP shared channel
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: Opening file '/etc/keepalived/keepalived.conf'.
Dec  5 14:14:32 node-2 Keepalived_healthcheckers[13747]: Using LinkWatch kernel netlink reflector...
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: Using LinkWatch kernel netlink reflector...
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: VRRP_Instance(haproxy-vip) Entering BACKUP STATE
Dec  5 14:14:32 node-2 Keepalived_vrrp[13748]: VRRP_Script(haproxy-check) succeeded
Dec  5 14:14:33 node-2 Keepalived_vrrp[13748]: VRRP_Instance(haproxy-vip) Changing effective priority from 100 to 102

Nodes 2 and 3 ip addr:

Node 2

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.13/24 scope global vpn
   valid_lft forever preferred_lft forever

Node 3

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.103/24 scope global vpn
   valid_lft forever preferred_lft forever

Problem

However, when I stop keepalived on node 1, node 3 gets elected master and registers the VIP, but only node 3 can ping 10.0.0.1.

Node 1 logs

When stopping:

Dec  5 14:15:26 node-1 systemd[1]: Stopping Keepalive Daemon (LVS and VRRP)...
Dec  5 14:15:26 node-1 Keepalived[5871]: Stopping
Dec  5 14:15:26 node-1 Keepalived_healthcheckers[5872]: Stopped
Dec  5 14:15:26 node-1 Keepalived_vrrp[5873]: VRRP_Instance(haproxy-vip) sent 0 priority
Dec  5 14:15:27 node-1 Keepalived_vrrp[5873]: Stopped
Dec  5 14:15:27 node-1 Keepalived[5871]: Stopped Keepalived v1.3.2 (12/03,2016)
Dec  5 14:15:27 node-1 systemd[1]: Stopped Keepalive Daemon (LVS and VRRP).
Dec  5 14:15:28 node-1 ntpd[946]: Deleting interface #45 vpn, 10.0.0.1#123, interface stats: received=0, sent=0, dropped=0, active_time=451 secs

Node 1 ip addr:

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.222/24 scope global vpn
   valid_lft forever preferred_lft forever

Node 2 logs

Dec  5 14:15:27 node-2 Keepalived_vrrp[13748]: VRRP_Instance(haproxy-vip) Transition to MASTER STATE
Dec  5 14:15:27 node-2 Keepalived_vrrp[13748]: VRRP_Instance(haproxy-vip) Received advert with higher priority 102, ours 102
Dec  5 14:15:27 node-2 Keepalived_vrrp[13748]: VRRP_Instance(haproxy-vip) Entering BACKUP STATE

Node 2 ip addr:

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.13/24 scope global vpn
   valid_lft forever preferred_lft forever

Node 3 logs

Dec  5 14:15:27 node-3 Keepalived_vrrp[31252]: VRRP_Instance(haproxy-vip) Transition to MASTER STATE
Dec  5 14:15:27 node-3 Keepalived_vrrp[31252]: VRRP_Instance(haproxy-vip) Received advert with lower priority 102, ours 102, forcing new election
Dec  5 14:15:28 node-3 Keepalived_vrrp[31252]: VRRP_Instance(haproxy-vip) Entering MASTER STATE
Dec  5 14:15:29 node-3 ntpd[27734]: Listen normally on 36 vpn 10.0.0.1:123

Node 3 ip addr:

vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.0.0.103/24 scope global vpn
   valid_lft forever preferred_lft forever
inet 10.0.0.1/24 scope global secondary vpn
   valid_lft forever preferred_lft forever

More details

Traceroute

I used traceroute to try to get more information about the problem.

When all nodes are running keepalived and the VIP answers pings from everywhere, traceroute reaches it from every node:

$ traceroute 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
 1  10.0.0.1 (10.0.0.1)  0.094 ms  0.030 ms  0.019 ms

When keepalived is stopped on node 1 and node 3 has been elected, node 1 cannot figure out where the VIP is:

$ traceroute 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 ...
 29  * * *
 30  * * *

Node 2 expects node 1 to have the VIP:

$ traceroute 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
 1  10.0.0.222 (10.0.0.222)  0.791 ms  0.962 ms  1.080 ms
 2  * * *
 3  * * *
 ...

And node 3 has the VIP itself, so there it works.

Tinc device type

I read some mailing-list archives suggesting to use DeviceType = tap in the Tinc configuration so that ARP packets get transmitted (as far as I understand it), but it didn't help.

Actually, since the election does happen, I'm not sure Tinc is the root cause.

Trying without Tinc

I changed the keepalived configuration so that it uses the public Internet interface, with unicast.

I added the following block to the keepalived configuration on every node (here for node-1):

    unicast_src_ip XXX.XXX.XXX.XXX # node's public IP address
    unicast_peer {
        XXX.XXX.XXX.XXX # other node's public IP address
        XXX.XXX.XXX.XXX # other node's public IP address
    }

But the behaviour is exactly the same as described above, so Tinc shouldn't be involved.

Question

Can anyone help me figure out what's wrong and fix it, so that after a new election the nodes can find the VIP at its new location?

keepalived
  • 1 answer
  • 619 views
ZedTuX
Asked: 2019-11-22 21:57:21 +0800 CST

Run a command as root, but force the command to write its data as another user

  • 0

To make this question clearer, let me explain my use case.

I have a MySQL database running as the mysql user (so, an application), and this database has 2 root users:

  • one reachable over TCP connections, but with no privileges
  • one reachable over the local socket, with all privileges

To back up the data, I run a command as root so that it can connect through the socket and execute queries; as a result, the created files belong to root, which I don't want, because I need to access them (read/write) as the mysql user.

Is there a way to execute a command as one user, but force/change the uid that owns the files written to disk?
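
One pattern that would fit (my own sketch, not something from the question): keep running the producer as root, but let a second process running as mysql do the actual file writing, e.g. mysqldump piped into sudo -u mysql tee. The snippet below only demonstrates the pipe shape with placeholder commands, since the real command needs root and a MySQL socket:

```shell
# Real shape (hypothetical paths): the dump runs as root over the socket,
# while tee runs as mysql and is the process that creates the file, so the
# file on disk is owned by mysql:
#   mysqldump --socket=/var/run/mysqld/mysqld.sock --all-databases \
#     | sudo -u mysql tee /var/backups/all.sql > /dev/null
# Placeholder demonstration of the same pipeline:
printf 'SQL DUMP CONTENT\n' | tee /tmp/demo_dump.sql > /dev/null
cat /tmp/demo_dump.sql   # prints: SQL DUMP CONTENT
```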

linux
  • 3 answers
  • 100 views
ZedTuX
Asked: 2015-05-29 09:28:27 +0800 CST

s6: how can I make a run script run only once?

  • 2

I'm using s6 (http://skarnet.org/software/s6/) to supervise several processes.

I have several services in my /etc/s6/ folder, and one of them only needs the start action invoked from an init.d script.

So far the script starts just fine, but then s6 tries to restart it again and again.

Is there a way to avoid this?
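
For context (an assumption of mine, not part of the question): one pattern used with s6 is to have the run script mark its own service as run-once via s6-svc -O before doing its work, so the supervisor keeps watching it but does not restart it when it exits. A hypothetical run script:

```shell
#!/bin/sh
# Hypothetical /etc/s6/my-service/run script.
# `s6-svc -O .` means "once": stay supervised, but don't restart after exit.
s6-svc -O .
exec /usr/local/bin/do-start-work   # hypothetical one-shot command
```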

linux
  • 2 answers
  • 2273 views
ZedTuX
Asked: 2014-12-31 12:13:33 +0800 CST

Nginx serves the SSL certificate for every server name resolving to the server's IP

  • 4

I have 2 subdomains configured in DNS (so pinging either of them resolves to my server's IP address), and for these subdomains I have 2 different TLS certificates.

I have configured nginx this way:

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  ''      '';
}

gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

access_log /var/log/nginx.log;
error_log /var/log/nginx_errors.log;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;

server {
  listen 80 default_server;
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  return 503;
  server_tokens off; # Hide the nginx version
}


upstream sub1.domain.tld {
  server 172.17.0.27:5000;
}

server {
  server_name sub1.domain.tld;
  server_tokens off; # Hide the nginx version

  listen 443 ssl;
  ssl_certificate /etc/nginx/ssl/sub1.domain.tld.crt;
  ssl_certificate_key /etc/nginx/ssl/sub1.domain.tld.key;

  location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd/sub1.htpasswd;
    proxy_pass http://sub1.domain.tld;
  }
}

At this point, everything works fine when I browse to https://sub1.domain.tld. Now, if I try to reach https://sub2.domain.tld, which is not configured yet and therefore shouldn't answer, the server accepts the connection and the browser complains about the certificate because it doesn't match the server name. So with this configuration, Nginx seems to send the sub1 certificate for every request on port 443.

How should I change my configuration so that accessing https://sub2.domain.tld fails (for instance with a 503 error) until I configure it by adding a new server directive?
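
For reference, a sketch of one common approach (my assumption, not from the question; paths are placeholders): add a catch-all default_server for port 443 with a throwaway self-signed certificate, since the TLS handshake needs some certificate to complete before nginx can return 503 for unknown names:

```nginx
server {
  listen 443 ssl default_server;
  server_name _;
  # Throwaway self-signed certificate: requests for unconfigured names land
  # here, the handshake completes with this cert, and nginx answers 503.
  ssl_certificate     /etc/nginx/ssl/fallback.crt;
  ssl_certificate_key /etc/nginx/ssl/fallback.key;
  return 503;
}
```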

nginx
  • 2 answers
  • 3087 views
