I have an application that installs itself into the /opt/my_app/ directory. I would now like to set up two servers in a cluster (active-passive) and keep that whole directory synchronized with DRBD. As far as I understand, DRBD needs a block device, so I would add a new virtual disk (both machines are ESX VMs), create a partition on it, and then a physical volume, a volume group, and a logical volume. My question: is it technically possible to put /opt/my_app/ on a DRBD device and synchronize it between the two nodes?
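For reference, the block-device stack described above could be prepared roughly like this on each node (a minimal sketch; the disk /dev/sdb, the names vg_drbd/lv_otrs, and the 10G size are assumptions, not taken from the question):

parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%    # partition the new virtual disk
pvcreate /dev/sdb1                                        # physical volume
vgcreate vg_drbd /dev/sdb1                                # volume group
lvcreate -n lv_otrs -L 10G vg_drbd                        # logical volume that will back DRBD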
EDIT:
[root@server2 otrs]# pcs config
Cluster Name: otrs_cluster
Corosync Nodes:
 server1 server2
Pacemaker Nodes:
 server1 server2

Resources:
 Group: OTRS
  Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: cidr_netmask=8 ip=10.0.0.60
   Operations: monitor interval=20s (ClusterIP-monitor-interval-20s)
               start interval=0s timeout=20s (ClusterIP-start-interval-0s)
               stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
  Resource: otrs_file_system (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/drbd0 directory=/opt/otrs/ fstype=ext4
   Operations: monitor interval=20 timeout=40 (otrs_file_system-monitor-interval-20)
               start interval=0s timeout=60 (otrs_file_system-start-interval-0s)
               stop interval=0s timeout=60 (otrs_file_system-stop-interval-0s)
 Master: otrs_data_clone
  Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1
  Resource: otrs_data (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=otrs
   Operations: demote interval=0s timeout=90 (otrs_data-demote-interval-0s)
               monitor interval=30s (otrs_data-monitor-interval-30s)
               promote interval=0s timeout=90 (otrs_data-promote-interval-0s)
               start interval=0s timeout=240 (otrs_data-start-interval-0s)
               stop interval=0s timeout=100 (otrs_data-stop-interval-0s)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: ClusterIP
    Enabled on: server1 (score:INFINITY) (role: Started) (id:cli-prefer-ClusterIP)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: otrs_cluster
 dc-version: 1.1.16-12.el7_4.8-94ff4df
 have-watchdog: false
 last-lrm-refresh: 1525108871
 stonith-enabled: false

Quorum:
  Options:
[root@server2 otrs]#
[root@server2 otrs]# pcs status
Cluster name: otrs_cluster
Stack: corosync
Current DC: server1 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Mon Apr 30 14:11:54 2018
Last change: Mon Apr 30 13:27:47 2018 by root via crm_resource on server2

2 nodes configured
4 resources configured

Online: [ server1 server2 ]

Full list of resources:

 Resource Group: OTRS
     ClusterIP          (ocf::heartbeat:IPaddr2):       Started server2
     otrs_file_system   (ocf::heartbeat:Filesystem):    Started server2
 Master/Slave Set: otrs_data_clone [otrs_data]
     Masters: [ server2 ]
     Slaves: [ server1 ]

Failed Actions:
* otrs_file_system_start_0 on server1 'unknown error' (1): call=78, status=complete, exitreason='Couldn't mount filesystem /dev/drbd0 on /opt/otrs',
    last-rc-change='Mon Apr 30 13:21:13 2018', queued=0ms, exec=151ms

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@server2 otrs]#
This is certainly possible.
After adding the block device and creating the LVM volume backing the DRBD device, you configure and initialize the DRBD device (drbdadm create-md <res> and drbdadm up <res>). Then promote one node to Primary (note: you only need to force the promotion the first time you promote the device, because the disks start out in the Inconsistent/Inconsistent state): drbdadm primary <res> --force
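For illustration, a minimal DRBD resource definition and the initialization steps just described might look like the following (the resource name otrs matches drbd_resource=otrs in the pcs configuration above, but the backing LV path, host names, and replication IPs/port are assumptions):

/etc/drbd.d/otrs.res (identical on both nodes):

resource otrs {
    device    /dev/drbd0;
    disk      /dev/vg_drbd/lv_otrs;          # backing logical volume (assumed name)
    meta-disk internal;
    on server1 { address 10.0.0.61:7789; }   # replication address (assumed)
    on server2 { address 10.0.0.62:7789; }   # replication address (assumed)
}

Initialization:

drbdadm create-md otrs           # on both nodes: write DRBD metadata
drbdadm up otrs                  # on both nodes: bring the resource up
drbdadm primary otrs --force     # on one node only, for the very first promotion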
You can then put a filesystem on the device and mount it anywhere on the system, including /opt/my_app, just as you would with a normal block device. If there is existing data in /opt/my_app/ that needs to be moved onto the DRBD device, you can mount the device somewhere else, move/copy the data from /opt/my_app/ into that mountpoint, and then remount the DRBD device at /opt/my_app; or you can use a symlink to point /opt/my_app at the DRBD device's mountpoint.
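A minimal sketch of that migration, assuming the filesystem is ext4, the device is /dev/drbd0 (as in the configuration above), and /mnt/tmp is used as a temporary mountpoint (an assumption):

mkfs.ext4 /dev/drbd0                 # on the DRBD Primary only
mount /dev/drbd0 /mnt/tmp
cp -a /opt/my_app/. /mnt/tmp/        # copy the existing data onto the DRBD-backed filesystem
umount /mnt/tmp
mount /dev/drbd0 /opt/my_app         # remount the DRBD device at the application directory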
Updated answer after the EDIT:

You will need to add colocation and ordering constraints to the cluster configuration to tell the OTRS resource group to run only on the DRBD Master and to start only after the DRBD Master has been promoted. These commands should add those constraints:
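With pcs they could be expressed roughly as follows (the resource names OTRS and otrs_data_clone are taken from the configuration above; the exact syntax can vary between pcs versions):

pcs constraint colocation add OTRS with master otrs_data_clone INFINITY
pcs constraint order promote otrs_data_clone then start OTRS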