AskOverflow.Dev

CodeNinja's questions

CodeNinja
Asked: 2021-01-07 07:12:28 +0800 CST

How do I fix the Open Xchange `/opt/open-xchange/sbin/registerserver` command giving the error `Look-up failed. Service "OXUtil_V2" is not available.`?

  • 1

I'm new to Open Xchange and tried to install it on a clean Debian 10.7 VM following the official installation instructions.

I installed MariaDB on the same server, gave the root user a password (root_db_pass#example), and created a new database (open_xchange) and a user (open_xchange_user) with the password (ox_db_pass#example).

Following the installation instructions, every step worked fine until I had to register the local server in the Open-Xchange configdb database (`/opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P admin_master_password`), at which point I got this error:

root@axx-oxch-srv01:~# /opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P admin_master_password

server could not be registered:
Server response:
 Look-up failed. Service "OXUtil_V2" is not available.
        at sun.rmi.registry.RegistryImpl.lookup(RegistryImpl.java:227)
        at sun.rmi.registry.RegistryImpl_Skel.dispatch(RegistryImpl_Skel.java:133)
        at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:469)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:301)
        at sun.rmi.transport.Transport$1.run(Transport.java:200)
        at sun.rmi.transport.Transport$1.run(Transport.java:197)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
        at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:303)
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:279)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:380)
        at sun.rmi.registry.RegistryImpl_Stub.lookup(RegistryImpl_Stub.java:123)
        at java.rmi.Naming.lookup(Naming.java:101)
        at com.openexchange.admin.console.util.server.RegisterServer.<init>(RegisterServer.java:77)
        at com.openexchange.admin.console.util.server.RegisterServer.main(RegisterServer.java:91)


I googled the problem but could hardly find anything about it. I checked the passwords 3 times to make sure I didn't make a typo! How can I solve this?


Some additional information that might help:

The initconfigdb command used:

/opt/open-xchange/sbin/initconfigdb --configdb-dbname open_xchange --configdb-user open_xchange_user --configdb-pass ox_db_pass --mysql-root-passwd root_db_pass 

The oxinstaller command used:

/opt/open-xchange/sbin/oxinstaller --no-license --servername=oxserver --configdb-pass=ox_db_pass --master-pass=admin_master_password --network-listener-host=localhost --servermemory 1024

If more data is needed, let me know and I'll update my question...


Update 1

I just discovered that something different from the message I get in the console is also being logged. The log entries below show up after a `service open-xchange restart`, which is required after executing `/opt/open-xchange/sbin/oxinstaller --no-license --servername=oxserver`.

The log:

2021-01-07T01:56:55,546-0600 INFO  [pool-8-thread-1] com.openexchange.startup.impl.osgi.DBMigrationMonitorTracker$1.run(DBMigrationMonitorTracker$1.java:132)


        Open-Xchange Server v7.10.4-Rev14 initialized. The server should be up and running...

2021-01-07T01:57:01,425-0600 INFO  [OXTimer-0000006] com.openexchange.push.impl.balancing.reschedulerpolicy.PermanentListenerRescheduler.cancelTimerTask(PermanentListenerRescheduler.java:207)
Canceled timer task for rescheduling checks
2021-01-07T01:57:01,439-0600 WARN  [OXTimer-0000006] com.openexchange.database.internal.ReplicationMonitor.checkActualAndFallback(ReplicationMonitor.java:181)
DBP-0001 Categories=SERVICE_DOWN Message='Cannot get connection to config DB.' exceptionID=808265395-5
com.openexchange.exception.OXException: DBP-0001 Categories=SERVICE_DOWN Message='Cannot get connection to config DB.' exceptionID=808265395-5
        at com.openexchange.database.DBPoolingExceptionCodes.create(DBPoolingExceptionCodes.java:260)
        at com.openexchange.database.internal.ReplicationMonitor.createException(ReplicationMonitor.java:245)
        at com.openexchange.database.internal.ReplicationMonitor.checkActualAndFallback(ReplicationMonitor.java:174)
        at com.openexchange.database.internal.ReplicationMonitor.checkActualAndFallback(ReplicationMonitor.java:157)
        at com.openexchange.database.internal.ConfigDatabaseServiceImpl.get(ConfigDatabaseServiceImpl.java:138)
        at com.openexchange.database.internal.ConfigDatabaseServiceImpl.get(ConfigDatabaseServiceImpl.java:133)
        at com.openexchange.database.internal.ConfigDatabaseServiceImpl.getReadOnly(ConfigDatabaseServiceImpl.java:173)
        at com.openexchange.database.internal.DatabaseServiceImpl.getReadOnly(DatabaseServiceImpl.java:152)
        at com.openexchange.push.impl.PushDbUtils.getContextsWithPushRegistrations(PushDbUtils.java:296)
        at com.openexchange.push.impl.PushDbUtils.getPushRegistrations(PushDbUtils.java:256)
        at com.openexchange.push.impl.PushManagerRegistry.getUsersWithPermanentListeners(PushManagerRegistry.java:834)
        at com.openexchange.push.impl.balancing.reschedulerpolicy.PermanentListenerRescheduler.reschedule(PermanentListenerRescheduler.java:364)
        at com.openexchange.push.impl.balancing.reschedulerpolicy.PermanentListenerRescheduler.doReschedule(PermanentListenerRescheduler.java:341)
        at com.openexchange.push.impl.balancing.reschedulerpolicy.PermanentListenerRescheduler.checkReschedule(PermanentListenerRescheduler.java:327)
        at com.openexchange.push.impl.balancing.reschedulerpolicy.PermanentListenerRescheduler$1.run(PermanentListenerRescheduler.java:284)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at com.openexchange.threadpool.internal.CustomThreadPoolExecutor$ScheduledFutureTask.runPeriodic(CustomThreadPoolExecutor.java:1016)
        at com.openexchange.threadpool.internal.CustomThreadPoolExecutor$ScheduledFutureTask.run(CustomThreadPoolExecutor.java:1041)
        at com.openexchange.threadpool.internal.CustomThreadPoolExecutor$Worker.runTask(CustomThreadPoolExecutor.java:834)
        at com.openexchange.threadpool.internal.CustomThreadPoolExecutor$Worker.run(CustomThreadPoolExecutor.java:861)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.openexchange.pooling.PoolingException: Cannot create pooled object.
        at com.openexchange.pooling.ReentrantLockPool.get(ReentrantLockPool.java:320)
        at com.openexchange.database.internal.AbstractMetricAwarePool.get(AbstractMetricAwarePool.java:149)
        at com.openexchange.database.internal.AbstractMetricAwarePool.get(AbstractMetricAwarePool.java:72)
        at com.openexchange.database.internal.TimeoutFetchAndSchema.get(TimeoutFetchAndSchema.java:93)
        at com.openexchange.database.internal.ReplicationMonitor.checkActualAndFallback(ReplicationMonitor.java:167)
        ... 19 common frames omitted
===>>> Caused by: java.sql.SQLException: Access denied for user 'openexchange'@'localhost' (using password: YES) <<<===
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
...
...

Note this line: `Caused by: java.sql.SQLException: Access denied for user 'openexchange'@'localhost' (using password: YES)`. This is interesting, because openexchange@localhost is not the user I provided during the initconfigdb command, nor does it correspond to the database user I created in MariaDB.

I tried creating such a user with the same password as the user I created and ran everything again, but unfortunately that results in the same error. I have no idea where Open Xchange gets this user from...
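A first step I would try here (a diagnostic sketch, assuming the default Debian package paths; not a confirmed fix): check which credentials `oxinstaller`/`initconfigdb` actually wrote into the configdb settings, and grant that exact user access in MariaDB.

```shell
# Show the DB user/URL Open-Xchange was actually configured with
# (default config location for the Debian packages; may differ)
grep -Ei 'user|url' /opt/open-xchange/etc/configdb.properties

# If it really is 'openexchange'@'localhost', create a matching grant
# (password and privilege scope below are placeholders to adapt)
mysql -u root -p -e "GRANT ALL PRIVILEGES ON open_xchange.* TO 'openexchange'@'localhost' IDENTIFIED BY 'ox_db_pass'; FLUSH PRIVILEGES;"
```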

open-xchange
  • 1 Answer
  • 620 Views
CodeNinja
Asked: 2020-11-10 07:08:06 +0800 CST

Is there a way to make a pre-configured OpenVPN-AS Connect client (server-locked profile) work behind a reverse proxy?

  • 2

The problem

We host multiple services on 1 public IP:

  • OpenVPN Access Server (vpn.ourdomain.com)
  • OpenVPN Community edition (old-vpn.ourdomain.com)
  • Nginx web server (subdomain.ourdomain.com)

To make this all work on our 1 and only public IP, we use an Nginx reverse proxy server. This works for the web services and for OpenVPN where we distribute the certificates manually, but the "pre-configured OpenVPN Connect client" that can be downloaded from the OpenVPN AS CWS cannot connect with its bundled certificate. When I manually download a user-locked profile from the OpenVPN AS CWS and load it into the Connect client, I am able to connect.

I contacted OpenVPN about this issue and they answered:

the Nginx reverse proxy is the reason why the connection using a server-locked (bundled) profile fails.
In your configuration, Nginx performs SSL offload and corrupt TLS verification between OpenVPN Connect client and Access Server.

You can try to publish port TCP 443 from the Access Server and stop Nginx to verify that when AS is available directly your users can connect.

After I asked whether there is a workaround, I got this answer:

OpenVPN Access Server is not developed to be placed behind the reverse proxy. It should be able to handle TLS session from OpenVPN client in order to authenticate them.
If you place As behind a proxy you will loose possibility to use server-locked profiles, only user-locked and autologin profiles can be used in this case.

If I understand this correctly (after some additional research), the Nginx proxy handles the SSL connection that should actually be handled by the OpenVPN AS.


Attempted solutions

Because I can't imagine this is technically impossible, even if it isn't officially supported by OpenVPN, I consulted my friend Google and tried the following:

1

I thought I could "simply" solve this by passing the encrypted traffic for vpn.ourdomain.com straight through to the OpenVPN AS, using a non-terminating TLS pass-through (https://gist.github.com/kekru/c09dbab5e78bf76402966b13fa72b9d2). I couldn't combine it with the "normal reverse proxy" for the other services, so I decided to set up a test proxy server to check whether this approach would solve my problem at all.

I got the non-terminating TLS pass-through proxy working (I can reach vpn.ourdomain.com, and according to the browser it is an insecure connection). Unfortunately, I still can't connect with the Connect client that has the certificate/config bundled. When I expose the OpenVPN AS directly on vpn.ourdomain.com, it works fine.

2

In an older topic (https://forums.openvpn.net/viewtopic.php?t=27291) I read about someone with a similar problem, who noticed that the OpenVPN Connect client connects to the resolved IP of vpn.ourdomain.com instead of to the domain. This means the proxy never forwards those calls to the correct server. As a dirty workaround he (McSanz) forwarded all traffic for /RPC2 to the OpenVPN AS, which might also work for us since we currently host no application that uses that path. I tried this too (with and without 1).


Nginx configuration

The default reverse proxy (with TLS termination) I tried:

server {
  listen        443 ssl;
  server_name   vpn.ourdomain.com "RPC2" "^rpc2$";   # tried with and without "RPC2" "^rpc2$"

  ssl_certificate /etc/nginx/ssl/_.ourdomain.com/_.ourdomain.com.chained.crt;
  ssl_certificate_key /etc/nginx/ssl/_.ourdomain.com/_.ourdomain.com.key;
  ssl_client_certificate /etc/nginx/ssl/_.ourdomain.com/_.ourdomain.com.ca;
  ssl_verify_client optional;

  location / {
    proxy_pass https://10.128.20.5:443;

    # app1 reverse proxy follow
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  access_log /var/log/nginx/access_log.log;
  error_log /var/log/nginx/error_log.log;
}
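The /RPC2 forward from attempt 2 could be sketched as an extra location block inside that server (an assumption of how such a forward would look, not a confirmed fix):

```nginx
  # Sketch: forward OpenVPN Connect's XML-RPC calls to the Access Server.
  # For clients that connect by resolved IP instead of hostname, this
  # server block would also need to be the default_server, otherwise
  # the location is never hit.
  location /RPC2 {
    proxy_pass https://10.128.20.5:443;
    proxy_set_header Host $host;
  }
```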

The non-terminating TLS pass-through stream I tried looks like this:

stream {
   map $ssl_preread_server_name $targetBackend {
     vpn.ourdomain.com openvpnas;
     "^rpc2&" openvpnas;
   }

    upstream openvpnas {
      server 10.128.20.5:443;      # This is the OpenVPN AS IP
    }

   server {
     listen 443;

     proxy_connect_timeout 1s;
     proxy_timeout 3s;
     resolver 1.1.1.1;         # No idea what this does, i tried with, without this line and with 8.8.8.8

     proxy_pass $targetBackend;   # i tried 10.128.20.5:443 here as well
     ssl_preread on;
  }
}

Note that to make the stream work, I had to comment out `include /etc/nginx/sites-enabled/*;` inside the `http` block of `nginx.conf` and add `/etc/nginx/streams-enabled/*` (where the stream config is placed) as a top-level include in `nginx.conf`. This means that while I test the stream functionality, the reverse proxy no longer works for all the other services. If the stream config turns out to solve the problem, I need to run it in parallel with the default reverse-proxy functionality for the other services we host.
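For what it's worth, `stream {}` and `http {}` are sibling top-level contexts in nginx, so the two includes can in principle live side by side in `nginx.conf`; the practical conflict is that an `http` server and a `stream` server cannot both listen on the same IP:443. A sketch of the combined layout (assuming the stream configs live in `/etc/nginx/streams-enabled`):

```nginx
# nginx.conf (excerpt) - http and stream as sibling top-level contexts
http {
    include /etc/nginx/sites-enabled/*;     # normal reverse proxies
}

stream {
    include /etc/nginx/streams-enabled/*;   # TLS pass-through streams
    # note: a stream server and an http server cannot share the same IP:443
}
```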


Questions

  • Is it possible to get the pre-configured OpenVPN AS Connect client working with the bundled config, even though it isn't officially supported by OpenVPN?
  • If it can be made to work, can Nginx still be used as the reverse proxy, or should we replace the proxy with another application?
  • If Nginx can act as the reverse proxy, what am I doing wrong?
openvpn ssl nginx reverse-proxy
  • 1 Answer
  • 1052 Views
CodeNinja
Asked: 2020-09-24 03:27:01 +0800 CST

Why can't OpenVPN clients with a static (vpn) IP ping the server, while clients with a DHCP (vpn) IP can?

  • 0

I'm setting up an OpenVPN network. The idea is that servers connecting to the VPN get a fixed IP, while clients (which use the servers' services) get a DHCP IP.

  • Servers should get an IP in 10.10.0.1 - 10.10.0.254
  • Clients should get an IP in 10.10.1.1 - 10.10.255.254

My current setup looks like this:

  • OpenVPN server 10.10.0.1
  • Application server 10.10.0.20 <- static via ccd
  • Client 10.10.1.2 <- DHCP

I got it working so that my clients get a DHCP address in the correct range. They are also able to ping the OpenVPN server, and vice versa.

I was also able to set up client-specific configurations for the servers so they get a static IP, but for some reason those servers cannot ping the OpenVPN server, and I cannot ping them from the OpenVPN server either.

Can someone help me find out where I went wrong in my configuration?

OpenVPN config files

OpenVPN server config:

port 3194
proto udp
dev tun
mode server

ca server_cert/ca.crt
cert server_cert/ovpn-server.crt
key server_cert/ovpn-server.key  # This file should be kept secret
dh server_cert/dh.pem

tls-server
cipher AES-256-CBC

ifconfig 10.10.0.1 255.255.0.0
ifconfig-pool 10.10.1.1 10.10.255.254

route 10.10.0.0 255.255.0.0
push "route-gateway 10.10.0.1 255.255.0.0"
push "route 10.10.0.0 255.255.0.0"

ifconfig-pool-persist ipp.txt

client-config-dir ccd
client-to-client
duplicate-cn

keepalive 10 120

persist-key
persist-tun

status openvpn-status.log
log-append  /var/log/openvpn.log
verb 6
explicit-exit-notify 1

The client-specific config for my application server:

ifconfig-push 10.10.0.20 10.10.0.1
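One thing I would double-check here (an assumption on my part, not something the question confirms): with OpenVPN's default `net30` topology, `ifconfig-push` expects a valid /30 endpoint pair, whereas with `topology subnet` it takes an address plus netmask. A subnet-topology sketch of the relevant lines would be:

```conf
# server.conf (sketch)
topology subnet
ifconfig 10.10.0.1 255.255.0.0
ifconfig-pool 10.10.1.1 10.10.255.254 255.255.0.0

# ccd entry for the application server (sketch)
ifconfig-push 10.10.0.20 255.255.0.0
```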

My client.conf (used on the application server):

client
dev tun
proto udp
port 3194
remote vpn.domain.com 3194
nobind
cipher AES-256-CBC

ca keys/ca.crt
cert /etc/openvpn/keys/ngin-web01.crt
key /etc/openvpn/keys/ngin-web01.key

log-append /var/log/openvpn.log
verb 6

Routes

OpenVPN server (10.10.0.1):

root@ovpn-srv01:/home/axxmin# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.128.20.1     0.0.0.0         UG    0      0        0 ens18
10.10.0.0       255.255.0.0     255.255.0.0     UG    0      0        0 tun0
10.128.20.0     0.0.0.0         255.255.255.0   U     0      0        0 ens18
255.255.0.0     0.0.0.0         255.255.255.255 UH    0      0        0 tun0

root@ovpn-srv01:/home/axxmin# routel
         target            gateway          source    proto    scope    dev tbl
     10.10.0.0/ 16     255.255.0.0                                     tun0
    255.255.0.0                          10.10.0.1   kernel     link   tun0
      10.10.0.1              local       10.10.0.1   kernel     host   tun0 local
        default        10.128.20.1                   static           ens18
   10.128.20.0/ 24                     10.128.20.6   kernel     link  ens18
    10.128.20.0          broadcast     10.128.20.6   kernel     link  ens18 local
    10.128.20.6              local     10.128.20.6   kernel     host  ens18 local
  10.128.20.255          broadcast     10.128.20.6   kernel     link  ens18 local
      127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
     127.0.0.0/ 8            local       127.0.0.1   kernel     host     lo local
      127.0.0.1              local       127.0.0.1   kernel     host     lo local
127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
            ::1                                      kernel              lo
        fe80::/ 64                                   kernel           ens18
        fe80::/ 64                                   kernel            tun0
            ::1              local                   kernel              lo local
fe80::1083:7fff:fedd:70c0              local                   kernel           ens18 local
fe80::b24c:97a4:281:de41              local                   kernel            tun0 local
        ff00::/ 8                                                     ens18 local
        ff00::/ 8                                                      tun0 local

Application server (10.10.0.20):

root@ovpn-srv01:/home/axxmin# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.128.20.1     0.0.0.0         UG    0      0        0 ens18
10.10.0.0       255.255.0.0     255.255.0.0     UG    0      0        0 tun0
10.128.20.0     0.0.0.0         255.255.255.0   U     0      0        0 ens18
255.255.0.0     0.0.0.0         255.255.255.255 UH    0      0        0 tun0

root@ovpn-srv01:/home/axxmin# routel
         target            gateway          source    proto    scope    dev tbl
     10.10.0.0/ 16     255.255.0.0                                     tun0
    255.255.0.0                          10.10.0.1   kernel     link   tun0
      10.10.0.1              local       10.10.0.1   kernel     host   tun0 local
        default        10.128.20.1                   static           ens18
   10.128.20.0/ 24                     10.128.20.6   kernel     link  ens18
    10.128.20.0          broadcast     10.128.20.6   kernel     link  ens18 local
    10.128.20.6              local     10.128.20.6   kernel     host  ens18 local
  10.128.20.255          broadcast     10.128.20.6   kernel     link  ens18 local
      127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
     127.0.0.0/ 8            local       127.0.0.1   kernel     host     lo local
      127.0.0.1              local       127.0.0.1   kernel     host     lo local
127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
            ::1                                      kernel              lo
        fe80::/ 64                                   kernel           ens18
        fe80::/ 64                                   kernel            tun0
            ::1              local                   kernel              lo local
fe80::1083:7fff:fedd:70c0              local                   kernel           ens18 local
fe80::b24c:97a4:281:de41              local                   kernel            tun0 local
        ff00::/ 8                                                     ens18 local
        ff00::/ 8                                                      tun0 local

Client (10.10.1.2):

root@client-device:/home/pi# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.128.60.1     0.0.0.0         UG    202    0        0 eth0
10.10.0.0       10.10.1.1       255.255.0.0     UG    0      0        0 tun0
10.10.1.1       0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.128.60.0     0.0.0.0         255.255.255.0   U     202    0        0 eth0

root@client-device:/home/pi# routel
         target            gateway          source    proto    scope    dev tbl
     10.10.0.0/ 16       10.10.1.1                                     tun0
      10.10.1.1                          10.10.1.2   kernel     link   tun0
      10.10.1.2              local       10.10.1.2   kernel     host   tun0 local
        default        10.128.60.1    10.128.60.33     dhcp            eth0
   10.128.60.0/ 24                    10.128.60.33     dhcp     link   eth0
    10.128.60.0          broadcast    10.128.60.33   kernel     link   eth0 local
   10.128.60.33              local    10.128.60.33   kernel     host   eth0 local
  10.128.60.255          broadcast    10.128.60.33   kernel     link   eth0 local
      127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
     127.0.0.0/ 8            local       127.0.0.1   kernel     host     lo local
      127.0.0.1              local       127.0.0.1   kernel     host     lo local
127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
            ::1                                      kernel              lo
        fe80::/ 64                                   kernel            eth0
        fe80::/ 64                                   kernel            tun0
            ::1              local                   kernel              lo local
fe80::20d4:6b14:ff16:e230              local                   kernel            tun0 local
fe80::65cf:ce3:fc9f:20fa              local                   kernel            eth0 local
        ff00::/ 8                                                      eth0 local
        ff00::/ 8                                                      tun0 local

routing openvpn ping
  • 1 Answer
  • 1098 Views
CodeNinja
Asked: 2020-09-11 00:44:24 +0800 CST

Why does OpenVPN route traffic for one specific IP wrongly?

  • 1

I have the following topology, where x differs per site:

[OpenVPN client] < - > [OpenVPN Access Server] < - > [pfSense router] < - > [IPSec connected sites]
 172.27.244.21          10.128.20.5                    10.128.20.1            10.130.x.1

I can ping devices in the IPSec sites from an OpenVPN client or directly from the OpenVPN Access Server. There is one site (10.130.7.1) that I cannot ping from an OpenVPN client, although I can ping that site directly from the OpenVPN Access Server.

Ping results from the OpenVPN (Windows) client:

Pinging 10.130.2.1 with 32 bytes of data:
Reply from 10.130.2.1: bytes=32 time=160ms TTL=62
Reply from 10.130.2.1: bytes=32 time=142ms TTL=62
Reply from 10.130.2.1: bytes=32 time=126ms TTL=62
Reply from 10.130.2.1: bytes=32 time=103ms TTL=62

Pinging 10.130.17.1 with 32 bytes of data:
Reply from 10.130.17.1: bytes=32 time=46ms TTL=62
Reply from 10.130.17.1: bytes=32 time=51ms TTL=62
Reply from 10.130.17.1: bytes=32 time=55ms TTL=62
Reply from 10.130.17.1: bytes=32 time=29ms TTL=62

Pinging 10.130.7.1 with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping results from the OpenVPN Access Server (SSH):

PING 10.130.2.1 (10.130.2.1) 56(84) bytes of data.
64 bytes from 10.130.2.1: icmp_seq=1 ttl=63 time=136 ms
64 bytes from 10.130.2.1: icmp_seq=2 ttl=63 time=111 ms
64 bytes from 10.130.2.1: icmp_seq=3 ttl=63 time=122 ms
64 bytes from 10.130.2.1: icmp_seq=4 ttl=63 time=166 ms

PING 10.130.17.1 (10.130.17.1) 56(84) bytes of data.
64 bytes from 10.130.17.1: icmp_seq=1 ttl=63 time=29.1 ms
64 bytes from 10.130.17.1: icmp_seq=2 ttl=63 time=29.1 ms
64 bytes from 10.130.17.1: icmp_seq=3 ttl=63 time=29.5 ms
64 bytes from 10.130.17.1: icmp_seq=4 ttl=63 time=29.5 ms

PING 10.130.7.1 (10.130.7.1) 56(84) bytes of data.
64 bytes from 10.130.7.1: icmp_seq=1 ttl=63 time=29.5 ms
64 bytes from 10.130.7.1: icmp_seq=2 ttl=63 time=28.8 ms
64 bytes from 10.130.7.1: icmp_seq=3 ttl=63 time=28.5 ms
64 bytes from 10.130.7.1: icmp_seq=4 ttl=63 time=28.5 ms

To me it looks like the requests to 10.130.7.1 get lost somewhere. To debug this, I ran a traceroute from my OpenVPN client:

Tracing route to 10.130.2.1 over a maximum of 30 hops
  1     1 ms     1 ms     1 ms  172.27.232.1
  2     2 ms     2 ms     1 ms  10.128.20.1
  3   115 ms   115 ms   116 ms  10.130.2.1

Tracing route to 10.130.17.1 over a maximum of 30 hops
  1     1 ms     1 ms     2 ms  172.27.232.1
  2     1 ms     1 ms     1 ms  10.128.20.1
  3    76 ms    38 ms    42 ms  10.130.17.1

Tracing route to 10.130.7.1 over a maximum of 30 hops
  1     1 ms     2 ms     2 ms  172.27.232.1
  2     *        *        *     Request timed out.
  3     *        *        *     Request timed out.

Since the requests do seem to reach the OpenVPN Access Server (172.27.253.1), I ran a tcpdump on it while pinging from the Windows client:

10:27:53.900720  In ethertype IPv4 (0x0800), length 76: 172.27.244.21 > 10.130.2.1: ICMP echo request, id 1, seq 1036, length 40
10:27:53.900756 Out 6a:fd:3e:82:c5:b8 ethertype IPv4 (0x0800), length 76: 10.128.20.5 > 10.130.2.1: ICMP echo request, id 1, seq 1036, length 40
10:27:54.001502  In 00:25:90:bd:8a:4a ethertype IPv4 (0x0800), length 76: 10.130.2.1 > 10.128.20.5: ICMP echo reply, id 1, seq 1036, length 40
10:27:54.001531 Out ethertype IPv4 (0x0800), length 76: 10.130.2.1 > 172.27.244.21: ICMP echo reply, id 1, seq 1036, length 40

10:27:57.048858  In ethertype IPv4 (0x0800), length 76: 172.27.244.21 > 10.130.17.1: ICMP echo request, id 1, seq 1037, length 40
10:27:57.048909 Out 6a:fd:3e:82:c5:b8 ethertype IPv4 (0x0800), length 76: 10.128.20.5 > 10.130.17.1: ICMP echo request, id 1, seq 1037, length 40
10:27:57.077173  In 00:25:90:bd:8a:4a ethertype IPv4 (0x0800), length 76: 10.130.17.1 > 10.128.20.5: ICMP echo reply, id 1, seq 1037, length 40
10:27:57.077204 Out ethertype IPv4 (0x0800), length 76: 10.130.17.1 > 172.27.244.21: ICMP echo reply, id 1, seq 1037, length 40

10:27:59.502909  In ethertype IPv4 (0x0800), length 76: 172.27.244.21 > 10.130.7.1: ICMP echo request, id 1, seq 1038, length 40
10:27:59.502966 Out 6a:fd:3e:82:c5:b8 ethertype IPv4 (0x0800), length 76: 172.27.244.21 > 10.130.7.1: ICMP echo request, id 1, seq 1038, length 40

Ha! The requests to 10.130.7.1 leave the server with 172.27.244.21 (the OpenVPN client address the ping request came from) as the source. Why is that? Why don't they go out with 10.128.20.5 (the OpenVPN Access Server IP) as the source, like the requests to 10.130.2.1 and 10.130.17.1 do?

I don't know whether it's needed, but just to be sure, the routing table of the Access Server:

root@axx-ovpn-as01:/home/axxmin# routel
         target            gateway          source    proto    scope    dev tbl
        default        10.128.20.1                   static           ens18
   10.128.20.0/ 24                     10.128.20.5   kernel     link  ens18
  172.27.224.0/ 21                    172.27.224.1   kernel     link  as0t0
  172.27.232.0/ 21                    172.27.232.1   kernel     link  as0t1
  172.27.244.21                                      static           as0t1
    10.128.20.0          broadcast     10.128.20.5   kernel     link  ens18 local
    10.128.20.5              local     10.128.20.5   kernel     host  ens18 local
  10.128.20.255          broadcast     10.128.20.5   kernel     link  ens18 local
      127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
     127.0.0.0/ 8            local       127.0.0.1   kernel     host     lo local
      127.0.0.1              local       127.0.0.1   kernel     host     lo local
127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
   172.27.224.0          broadcast    172.27.224.1   kernel     link  as0t0 local
   172.27.224.1              local    172.27.224.1   kernel     host  as0t0 local
 172.27.231.255          broadcast    172.27.224.1   kernel     link  as0t0 local
   172.27.232.0          broadcast    172.27.232.1   kernel     link  as0t1 local
   172.27.232.1              local    172.27.232.1   kernel     host  as0t1 local
 172.27.239.255          broadcast    172.27.232.1   kernel     link  as0t1 local
            ::1              local                   kernel              lo
        fe80::/ 64                                   kernel           ens18
        fe80::/ 64                                   kernel           as0t0
        fe80::/ 64                                   kernel           as0t1
            ::1              local                   kernel              lo local
fe80::1cea:a857:88ab:b687              local                   kernel           as0t1 local
fe80::68fd:3eff:fe82:c5b8              local                   kernel           ens18 local
fe80::a3cb:f651:4066:8cb              local                   kernel           as0t0 local
        ff00::/ 8                                                     ens18 local
        ff00::/ 8                                                     as0t0 local
        ff00::/ 8                                                     as0t1 local
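Judging from the tcpdump, traffic to 10.130.2.1 and 10.130.17.1 is source-NATed to 10.128.20.5 on the way out, while traffic to 10.130.7.1 is not. On the Access Server, that difference could be checked against the NAT table (a diagnostic sketch; OpenVPN AS generates these iptables rules itself):

```shell
# List the SNAT/MASQUERADE rules and look for anything that treats
# 10.130.7.0/24 differently from the other site subnets
iptables -t nat -L POSTROUTING -n -v | grep -E 'SNAT|MASQUERADE|10\.130\.'
```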
vpn routing networking openvpn site-to-site-vpn
  • 1 Answer
  • 176 Views
CodeNinja
Asked: 2020-08-13 03:55:34 +0800 CST

Why doesn't the available space increase when I increase the quota of a ZFS share?

  • 2

I'm not very familiar with ZFS and need to increase the size of a ZFS share on FreeNAS. When I run `zpool list` I see that we have 2 ZFS pools:

NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Volume1       1.98T  1.00T  1003G         -    26%    50%  1.00x  ONLINE  /mnt
Volume2       1.98T   140G  1.85T         -     2%     6%  1.00x  ONLINE  /mnt

The share I want to enlarge is a directory on Volume1 called releases (it should go from 100G to 150G):

[root@axxfile] ~# zfs list | grep releases
Volume1/releases                                           100G   280K   100G  /mnt/Volume1/releases

[root@axxfile] ~# zfs get quota Volume1/releases
NAME              PROPERTY  VALUE  SOURCE
Volume1/releases  quota     100G   local

To increase the size I ran `zfs set quota=150 Volume1/releases`, which resulted in:

[root@axxfile] ~# zfs list | grep releases
Volume1/releases                                           100G   280K   100G  /mnt/Volume1/releases

[root@axxfile] ~# zfs get quota Volume1/releases
NAME              PROPERTY  VALUE  SOURCE
Volume1/releases  quota     150G   local

For some reason the quota increased from 100G to 150G, but the "available space" is still 100G. After asking Google for a solution, my impression was that I had grown the ZFS share but the OS doesn't know about it yet, so I would need to tell the OS with something like:

[root@axxfile] ~# growfs -M /mnt/Volume1/releases/ Volume1/releases
growfs: illegal option -- M
usage: growfs [-Ny] [-s size] special | filesystem

As you can see this doesn't work, because -M isn't a valid option. I kept googling but couldn't find a solution. Maybe someone can help me by explaining what I'm doing wrong or which step I'm missing?

Maybe good to know: we're running an old version (9.3) of FreeNAS. An update is planned for the near future, but we haven't been able to do it yet.

============== Update 1 ============ @Michael Hampton

I noticed that the refquota is still 100G; I guess that's the problem?

[root@axxfile] ~# zfs get quota,reservation,refquota,refreservation Volume1/releases
NAME              PROPERTY        VALUE      SOURCE
Volume1/releases  quota           150G       local
Volume1/releases  reservation     none       local
Volume1/releases  refquota        100G       local
Volume1/releases  refreservation  none       local

[root@axxfile] ~# zfs get -r reservation,refreservation -t filesystem,volume Volume1
cannot open '-t': dataset does not exist
cannot open 'filesystem,volume': invalid dataset name
NAME                                              PROPERTY        VALUE      SOURCE
Volume1                                           reservation     none       local
Volume1                                           refreservation  none       local
Volume1/VM                                        reservation     none       local
Volume1/VM                                        refreservation  none       local
Volume1/ab                                        reservation     none       local
Volume1/ab                                        refreservation  none       local
Volume1/backup                                    reservation     none       default
Volume1/backup                                    refreservation  none       default
Volume1/backup/cloneimages                        reservation     none       local
Volume1/backup/cloneimages                        refreservation  none       local
Volume1/backup/sicherungen                        reservation     none       local
Volume1/backup/sicherungen                        refreservation  none       local
Volume1/backup/switch                             reservation     none       default
Volume1/backup/switch                             refreservation  none       default
Volume1/jails                                     reservation     none       default
Volume1/jails                                     refreservation  none       default
Volume1/mailserver                                reservation     none       local
Volume1/mailserver                                refreservation  none       local
Volume1/releases                                  reservation     none       local
Volume1/releases                                  refreservation  none       local
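Since `refquota` is still at 100G while `quota` is now 150G, raising it the same way as the quota (mirroring the `zfs set` command above; a sketch to verify rather than a confirmed fix) would be:

```shell
zfs set refquota=150G Volume1/releases
zfs get quota,refquota Volume1/releases   # both should now show 150G
zfs list | grep releases                  # AVAIL should reflect the new limit
```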
quota zfs truenas zpool
  • 1 Answer
  • 808 Views
CodeNinja
Asked: 2020-08-06 23:30:50 +0800 CST

Why does Apache return an HTTP 500 but not log any error?

  • 1

A 12-year-old (web) server still hosts 1 old (custom) PHP application. We want to shut this server down and remove it from the rack. Unfortunately we still need the application; it is no longer actively used, but it needs to stay available for archival purposes. The application doesn't support PHP versions newer than v5.2. For that reason I want to host the application in a Docker container (https://hub.docker.com/r/kuborgh/php-5.2/dockerfile).

I started the container with `docker run --publish 9090:80 --name xxproject -v /var/docker_mounts/xxproject:/project e28e8b71a1f7`, which mounts my application folder (xxproject) on the host as the /project folder in the container. When I put an index.html in the xxproject folder on the host and browse to ip:9090, I see the expected content.

When I swap the index.html for an index.php file (content: Hello world), I get an HTTP 500 response.

When I check the Apache error log (in the container: `docker exec -it xxproject bash`), I don't get any entries. The log does work, because I can see a debug entry when I visit the site.

(`/etc/apache2/sites-enabled/000-project.conf` in the container; I added `LogLevel debug` for testing purposes):

<VirtualHost *:80>
    ServerAdmin webmaster@localhost

    DocumentRoot /project
    <Directory /project>
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
    LogLevel debug
</VirtualHost>

The only log entries in /var/log/apache2/error.log:

[Thu Aug 06 06:47:00 2020] [debug] mod_deflate.c(700): [client 10.128.10.41] Zlib: Compressed 0 to 2 : URL /index.php
[Thu Aug 06 06:47:01 2020] [debug] mod_deflate.c(700): [client 10.128.10.41] Zlib: Compressed 0 to 2 : URL /index.php

In /var/log/apache2/other_vhost_names.log I see:

172.17.0.3:80 10.128.10.41 - - [06/Aug/2020:06:47:00 +0000] "GET / HTTP/1.1" 500 375 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
172.17.0.3:80 10.128.10.41 - - [06/Aug/2020:06:47:01 +0000] "GET / HTTP/1.1" 500 375 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"

When I check with `php -a`, the interactive PHP shell works as expected. How can I debug this problem further? I know PHP 5.2 and Ubuntu 12 are deprecated and that it isn't recommended to keep using them, but that's beside the point!
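One way to make PHP 5.2 reveal the error instead of a bare 500 is to switch error display/logging on (a sketch; the php.ini location in this image is an assumption — `php -i | grep php.ini` shows the real one):

```ini
; php.ini (location is an assumption for this image)
display_errors = On
display_startup_errors = On
log_errors = On
error_log = /var/log/apache2/php_error.log
```

After changing it, restart Apache inside the container so the settings take effect.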

===== Update 1 =====

The PHP configuration in Apache (already set by default for the Docker image in question). I haven't made any changes to the Docker container other than those mentioned above.

root@63ca87239042:/etc/apache2# grep -ri -B 2 -A 2 php .
./mods-enabled/dir.conf-<IfModule mod_dir.c>
./mods-enabled/dir.conf-
./mods-enabled/dir.conf:          DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
./mods-enabled/dir.conf-
./mods-enabled/dir.conf-</IfModule>
--
./mods-enabled/php5.load:LoadModule php5_module        /usr/lib/apache2/modules/libphp5.so
--
./mods-available/dir.conf-<IfModule mod_dir.c>
./mods-available/dir.conf-
./mods-available/dir.conf:          DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
./mods-available/dir.conf-
./mods-available/dir.conf-</IfModule>
--
./mods-available/php5.load:LoadModule php5_module        /usr/lib/apache2/modules/libphp5.so
--
./sites-available/default-ssl-  #     directives are used in per-directory context.
./sites-available/default-ssl-  #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
./sites-available/default-ssl:  <FilesMatch "\.(cgi|shtml|phtml|php)$">
./sites-available/default-ssl-          SSLOptions +StdEnvVars
./sites-available/default-ssl-  </FilesMatch>

php apache-2.2 web-server php5
  • 1 Answer
  • 3256 Views
