I am trying to learn DRBD on virtual machines running CentOS 6.3. I configured two VMs, node1 and node2. I copied a file to the mount point /data, which is /dev/drbd0 on node1, but it does not show up in /data on node2.
Here is the configuration:
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
global {
    # do not participate in online usage survey
    usage-count no;
}
resource data {
    # write IO is reported as completed if it has reached both local
    # and remote disk
    protocol C;
    net {
        # set up peer authentication
        cram-hmac-alg sha1;
        shared-secret "s3cr3tp@ss";
        # default value 32 - increase as required
        max-buffers 512;
        # highest number of data blocks between two write barriers
        max-epoch-size 512;
        # size of the TCP socket send buffer - can tweak or set to 0 to
        # allow kernel to autotune
        sndbuf-size 0;
    }
    startup {
        # wait for connection timeout - boot process blocked
        # until DRBD resources are connected
        wfc-timeout 30;
        # WFC timeout if peer was outdated
        outdated-wfc-timeout 20;
        # WFC timeout if this node was in a degraded cluster (i.e. only had one
        # node left)
        degr-wfc-timeout 30;
    }
    disk {
        # the next two are for safety - detach on I/O error
        # and set up fencing - resource-only will attempt to
        # reach the other node and fence via the fence-peer
        # handler
        #on-io-error detach;
        #fencing resource-only;
        # no-disk-flushes; # if we had battery-backed RAID
        # no-md-flushes; # if we had battery-backed RAID
        # ramp up the resync rate
        # resync-rate 10M;
    }
    handlers {
        # specify the two fencing handlers
        # see: http://www.drbd.org/users-guide-8.4/s-pacemaker-fencing.html
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # first node
    on node1 {
        # DRBD device
        device /dev/drbd0;
        # backing store device
        disk /dev/sdb;
        # IP address of node, and port to listen on
        address 192.168.1.101:7789;
        # use internal meta data (don't create a filesystem before
        # you create metadata!)
        meta-disk internal;
    }
    # second node
    on node2 {
        # DRBD device
        device /dev/drbd0;
        # backing store device
        disk /dev/sdb;
        # IP address of node, and port to listen on
        address 192.168.1.102:7789;
        # use internal meta data (don't create a filesystem before
        # you create metadata!)
        meta-disk internal;
    }
}
Here is cat /proc/drbd:
cat: /proc/data: No such file or directory
[root@node1 /]# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:543648 nr:0 dw:265088 dr:280613 al:107 bm:25 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7848864
[>...................] sync'ed: 6.5% (7664/8188)M
finish: 7:47:11 speed: 272 (524) K/sec
I copied a file to /data on node1, but I cannot find it in /data on node2. Can anyone help?
DRBD status on node1:
[root@node1 /]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
m:res cs ro ds p mounted fstype
0:data SyncSource Primary/Secondary UpToDate/Inconsistent C /data ext3
... sync'ed: 8.1% (7536/8188)M
DRBD stands for Distributed Replicated Block Device. It is not a filesystem.
When you write a file on the primary node, the filesystem issues the writes; one layer below, DRBD makes sure those writes are replicated to the secondary node. To the secondary node, those writes are only visible as blocks of data. To see the files there, you normally have to unmount the partition on the primary node and mount it on the secondary node.
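As a rough sketch of that manual switchover, assuming the resource is named data as in your config, the initial sync has finished, and a /data mount point already exists on node2, the sequence would look something like this:

# on node1 (current primary): stop using the device and give up the Primary role
[root@node1 /]# umount /data
[root@node1 /]# drbdadm secondary data

# on node2: take over the Primary role and mount the replicated filesystem
[root@node2 /]# drbdadm primary data
[root@node2 /]# mount /dev/drbd0 /data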
However, there is a solution for what you are trying to achieve: a cluster filesystem. Such a filesystem allows you to mount the partition on both nodes at the same time; with commonly used filesystems such as ext4 this is not possible.
One example of a cluster filesystem that works on top of DRBD is OCFS2. To use it and mount the partition on both servers simultaneously, your DRBD resource has to be configured for dual-primary mode. In that mode there is no single primary node: both nodes are allowed to write to the resource, and the cluster filesystem makes sure the written data stays consistent.
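Dual-primary mode is enabled in the resource definition itself. A minimal sketch of the additions to your resource data section (allow-two-primaries and become-primary-on are standard DRBD options, but treat this as an outline rather than a drop-in config, and note that the ext3 filesystem you are using now would have to be replaced by OCFS2):

resource data {
    net {
        # let both nodes hold the Primary role at the same time
        allow-two-primaries;
    }
    startup {
        # optionally promote both nodes to Primary automatically at startup
        become-primary-on both;
    }
    # ... rest of the existing resource definition unchanged ...
}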
Prove me wrong, but IIRC you can only mount the FS on one of the nodes at a time. Let them finish syncing, unmount /data, switch over, mount it on node2, and you should see all the data.
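Note that your cat /proc/drbd output above still shows ds:UpToDate/Inconsistent with the sync only a few percent done, so node2 does not yet hold a complete copy. Before switching over you can check that the resync has finished, for example (assuming the resource name data):

[root@node1 /]# drbdadm cstate data    # should report "Connected" once the resync is done
[root@node1 /]# drbdadm dstate data    # should report "UpToDate/UpToDate"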