I have set up a MongoDB sharded cluster. It has three shard replica sets, each with three mongod instances, plus one replica set of three config servers and one mongos. It worked fine at first, but after running for a few days the config server replica set lost connectivity. When I log in to each config server instance, here is the output of rs.status():
Config server 1:
OTHER> rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 121353,
"optime" : {
"ts" : Timestamp(1504367995, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-09-02T15:59:55Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
Config server 2:
OTHER> rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 121421,
"optime" : {
"ts" : Timestamp(1504367995, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-09-02T15:59:55Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
Config server 3:
SECONDARY> rs.status()
{
"set" : "cnf-serv",
"date" : ISODate("2017-09-04T01:45:05.842Z"),
"myState" : 2,
"term" : NumberLong(3),
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"appliedOpTime" : {
"ts" : Timestamp(1504367995, 1),
"t" : NumberLong(3)
},
"durableOpTime" : {
"ts" : Timestamp(1504367995, 1),
"t" : NumberLong(3)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.19.0.10:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 121454,
"optime" : {
"ts" : Timestamp(1504367995, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-09-02T15:59:55Z"),
"configVersion" : 403866,
"self" : true
},
{
"_id" : 1,
"name" : "172.19.0.7:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-09-04T01:45:02.312Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 2,
"name" : "172.19.0.4:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-09-04T01:45:02.310Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
}
],
"ok" : 1
}
It looks like the first two instances have been removed and the third config server is a secondary. As I understand it, if one instance in a replica set goes down, another healthy instance should be elected primary. Why has the third instance not become the primary of my replica set?
All mongo instances are running version 3.4.4.
Here is the command I use to start the config server mongod:
mongod --replSet cnf-serv --rest --configsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
FYI, in the logs of the first two instances I see the following error message repeating:
2017-09-04T01:39:23.006+0000 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 134 could not get updated shard list from config server due to Read concern majority reads are currently not possible.; will retry after 30s
2017-09-04T01:39:53.006+0000 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 134 could not get updated shard list from config server due to Read concern majority reads are currently not possible.; will retry after 30s
2017-09-04T01:40:23.006+0000 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 134 could not get updated shard list from config server due to Read concern majority reads are currently not possible.; will retry after 30s
2017-09-04T01:40:53.006+0000 I SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 134 could not get updated shard list from config server due to Read concern majority reads are currently not possible.; will retry after 30s
OK, the reason the third server is a secondary rather than a primary is "majority". When only one of the three servers is up, that single server is not a majority. But if you lose only one of the three servers, the remaining two are fine, because 2 out of 3 is a majority.
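To make the arithmetic concrete: a replica set needs a strict majority of its voting members to elect a primary, i.e. floor(N/2) + 1 votes. A quick sketch (plain JavaScript, not mongo shell):

```javascript
// Votes needed to elect a primary in an N-member replica set:
// a strict majority, i.e. floor(N/2) + 1.
function votesNeeded(n) {
  return Math.floor(n / 2) + 1;
}

console.log(votesNeeded(3)); // 2 -- one surviving member out of three cannot elect itself
console.log(votesNeeded(5)); // 3 -- a 5-member set tolerates losing two members
```

With three config servers, your single reachable member has only 1 vote out of the 2 required, so it stays SECONDARY.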
Now back to the underlying problem. It looks like the two servers that are in the "OTHER" state now have different addresses/names than they had when you added them to the replica set. Check whether those two servers still resolve to the same IP addresses (or DNS names) that are recorded in the replica set configuration.
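One way to check this (a sketch; `rs.conf()` is normally still readable locally even on a member that reports itself as REMOVED, and those 172.19.0.x addresses look like Docker network addresses, which can change when containers are recreated):

```shell
# On each config server, compare the address the host actually has now...
hostname -i          # or: ip addr show

# ...with the member hosts recorded in the replica-set config:
mongo --port 27017 --quiet --eval 'rs.conf().members.forEach(function(m) { print(m.host); })'
```

If the printed member hosts no longer match the addresses the processes are actually reachable at, the two members cannot recognize themselves in the config, which matches the "Our replica set config is invalid or we are not a member of it" error.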
You can try reconfiguring with force: true from the one available secondary, supplying the correct IPs for the other two members.
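A sketch of that forced reconfiguration, run in the mongo shell on the surviving secondary (172.19.0.10); the two addresses below are placeholders, so substitute whatever the other two members actually resolve to now:

```javascript
// Run on the one reachable config server (the SECONDARY).
cfg = rs.conf()
// Point members 1 and 2 at their current addresses (placeholders here):
cfg.members[1].host = "172.19.0.7:27017"
cfg.members[2].host = "172.19.0.4:27017"
// force: true allows reconfiguration without a majority of the old set.
rs.reconfig(cfg, { force: true })
```

Use force: true with care: it is intended for exactly this situation, where a majority of the set is unavailable and a normal reconfig cannot commit.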