I have a standalone node, node1, that has been running for a long time. Now I want to convert it to a replica set, so I followed the steps in https://docs.mongodb.org/v2.4/tutorial/convert-standalone-to-replica-set/.
After adding a member from the mongo client connected to the primary, the command line prompt changed from ReplicaSet0:PRIMARY> to ReplicaSet0:SECONDARY>. Then I found that my production service was down.
I checked my Sentry (an error-collection service) and found that my Ruby code was throwing a lot of errors:
Moped::Errors::ConnectionFailure: Could not connect to a primary node for replica set #<Moped::Cluster:28315820 @seeds=[<Moped::Node resolved_address="10.128.129.90:27017">, <Moped::Node resolved_address="10.128.130.139:27017">]>
Here are my operations and the mongo output:
ReplicaSet0:PRIMARY> rs.add("node2")
{ "ok" : 1 }
ReplicaSet0:PRIMARY> rs.conf()
{
    "_id" : "ReplicaSet0",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "node1:27017"
        },
        {
            "_id" : 1,
            "host" : "node2:27017"
        }
    ]
}
ReplicaSet0:PRIMARY> rs.status()
Thu Oct 22 15:40:13.762 DBClientCursor::init call() failed
Thu Oct 22 15:40:13.763 Error: error doing query: failed at src/mongo/shell/query.js:78
Thu Oct 22 15:40:13.763 trying reconnect to 127.0.0.1:27017
Thu Oct 22 15:40:13.764 reconnect 127.0.0.1:27017 ok
ReplicaSet0:SECONDARY>
As you can see, PRIMARY became SECONDARY. Why did this happen? I think this is what caused my service to go down. How can I avoid it? Please help.
Update 0:
mongo.conf (yes, this is all of it.)
dbpath=/data/mongodb
logpath=/var/log/mongodb/mongodb.log
logappend=true
bind_ip = 0.0.0.0
journal=true
replSet=ReplicaSet0
Update 1: rs.status()
ReplicaSet0:SECONDARY> rs.status()
{
    "set" : "ReplicaSet0",
    "date" : ISODate("2015-10-22T07:58:14Z"),
    "myState" : 2,
    "members" : [
        {
            "_id" : 0,
            "name" : "kuankr:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2463,
            "optime" : Timestamp(1445499598, 19),
            "optimeDate" : ISODate("2015-10-22T07:39:58Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "mongo-primary:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-22T07:58:13Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
Here are some relevant lines picked from mongodb.log:
Thu Oct 22 15:39:52.139 [conn397] replSet replSetReconfig config object parses ok, 2 members specified
Thu Oct 22 15:39:54.599 [conn397] replSet replSetReconfig [2]
Thu Oct 22 15:39:54.599 [conn397] replSet info saving a newer config version to local.system.replset
Thu Oct 22 15:39:54.607 [conn397] replSet saveConfigLocally done
Thu Oct 22 15:39:54.607 [conn397] replSet info : additive change to configuration
Thu Oct 22 15:39:54.607 [conn397] replSet replSetReconfig new config saved locally
Thu Oct 22 15:39:54.607 [conn397] command admin.$cmd command: { replSetReconfig: { _id: "ReplicaSet0", version: 2, members: [ { _id: 0, host: "kuankr:27017" }, { _id: 1.0, host: "mongo-primary" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:8249 reslen:37 2467ms
Thu Oct 22 15:39:54.612 [rsHealthPoll] replSet member mongo-primary:27017 is up
Thu Oct 22 15:39:54.612 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
Thu Oct 22 15:40:08.610 [rsHealthPoll] DBClientCursor::init call() failed
Thu Oct 22 15:40:08.750 [rsHealthPoll] replSet info mongo-primary:27017 is down (or slow to respond):
Thu Oct 22 15:40:08.750 [rsHealthPoll] replSet member mongo-primary:27017 is now in state DOWN
Thu Oct 22 15:40:08.750 [rsMgr] can't see a majority of the set, relinquishing primary
Thu Oct 22 15:40:08.750 [rsMgr] replSet relinquishing primary state
Thu Oct 22 15:40:08.750 [rsMgr] replSet SECONDARY
Thu Oct 22 15:40:08.750 [rsMgr] replSet closing client sockets after relinquishing primary
Thu Oct 22 15:40:08.751 [conn4] end connection 10.128.132.214:47738 (61 connections now open) *
Thu Oct 22 15:40:08.755 [conn385] end connection 127.0.0.1:35975 (1 connection now open)
Thu Oct 22 15:40:15.895 [rsMgr] replSet info electSelf 0
Thu Oct 22 15:40:15.896 [rsMgr] replSet couldn't elect self, only received 1 votes
Thu Oct 22 15:40:21.897 [rsMgr] replSet info electSelf 0
Thu Oct 22 15:40:21.897 [rsMgr] replSet couldn't elect self, only received 1 votes
Thu Oct 22 15:40:35.897 [rsHealthPoll] DBClientCursor::init call() failed
Thu Oct 22 15:40:35.898 [rsHealthPoll] replSet info mongo-primary:27017 is down (or slow to respond):
Thu Oct 22 15:40:35.898 [rsMgr] replSet can't see a majority, will not try to elect self
Thu Oct 22 15:40:43.899 [rsHealthPoll] replSet member mongo-primary:27017 is up
Thu Oct 22 15:40:43.899 [rsMgr] replSet info electSelf 0
Thu Oct 22 15:40:43.900 [rsMgr] replSet couldn't elect self, only received 1 votes
Basically, the scenario described in the comments is what happened here. You added a new host (mongo-primary) to the set, and that host cannot be reached from your original host (kuankr). That means you have a replica set with 2 hosts, only one of which is healthy. When that happens, you cannot satisfy the requirement for electing a primary: more than 50% of the votes (a strict majority).
In a 2-node set, both nodes must be up and voting in order to elect a primary. In a 3-node set you need 2 out of 3, in a 4-node set you need 3 out of 4, in a 5-node set you need 3 out of 5, and so on.
This is why it is always recommended to have an odd number of nodes in your set. I would recommend adding an arbiter that is reachable from your original primary so that it can be elected again. Then, once the immediate problem is fixed, figure out why the original primary cannot talk to the new node (the most common culprits: firewalls, routing, an incorrect bind IP on the new node).
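As a rough sketch of that (the hostname arbiter1 and the dbpath /data/arb are placeholders, not part of your setup): start a small mongod on a host that kuankr can reach, using the same replica set name:

mongod --port 27017 --dbpath /data/arb --replSet ReplicaSet0

and then add it from a mongo shell connected to the primary:

rs.addArb("arbiter1:27017")

Note that rs.addArb() normally has to be run against a primary, so while the set is stuck in SECONDARY you may need the forced reconfig shown in the update below instead.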
Update based on the comments:
If the helper does not work on the secondary and you need to force the add, then you can do the following:
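The exact commands are not shown in this excerpt; what follows is only a minimal sketch of a forced reconfig run from the surviving member (kuankr), with arbiter1 as a placeholder hostname:

// in the mongo shell on kuankr, which is currently stuck in SECONDARY
cfg = rs.conf()
// append the new member; the _id and host values here are placeholders
cfg.members.push({ _id: 2, host: "arbiter1:27017", arbiterOnly: true })
// force is needed because there is no primary to accept a normal reconfig
rs.reconfig(cfg, { force: true })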
You can also use a similar procedure to remove the "bad" node.
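For example (again only a sketch, assuming mongo-primary:27017 is the member you want to drop; with a healthy primary you would normally just run rs.remove("mongo-primary:27017")):

// from the mongo shell on the surviving member
cfg = rs.conf()
// keep every member except the unreachable one
cfg.members = cfg.members.filter(function (m) { return m.host !== "mongo-primary:27017"; })
// again, force the reconfig because no primary is available
rs.reconfig(cfg, { force: true })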