Questions tagged [percona] (dba)

laimison
Asked: 2023-01-12 17:31:01 +0800 CST

command: apply_migration, originalError: Error 1845: LOCK=NONE is not supported for this operation. Try LOCK=SHARED

  • 5

I am getting this error while applying a schema migration:

{"timestamp":"2023-01-11 11:53:09.043 Z","level":"fatal","msg":"Failed to apply database migrations.","caller":"sqlstore/store.go:169","error":"driver: mysql, message: failed when applying migration, command: apply_migration, originalError: Error 1845: LOCK=NONE is not supported for this operation. Try LOCK=SHARED., query: \n\nSET @preparedStatement = (SELECT IF(\n    (\n        SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS\n        WHERE table_name = 'Posts'\n        AND table_schema = DATABASE()\n        AND index_name = 'idx_posts_create_at_id'\n    ) > 0,\n    'SELECT 1;',\n    'CREATE INDEX idx_posts_create_at_id on Posts(CreateAt, Id) LOCK=NONE;'\n));\n\nPREPARE createIndexIfNotExists FROM @preparedStatement;\nEXECUTE createIndexIfNotExists;\nDEALLOCATE PREPARE createIndexIfNotExists;\n\n"}

Do I have to apply the new schema manually and then use the application as usual?

Thanks
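
For reference, the index the migration is trying to build could in principle be created by hand with the lock mode the error suggests; a minimal sketch, assuming the index really is missing and a brief shared (read-only) lock on Posts is acceptable:

-- Hedged sketch: same index as the migration, but with LOCK=SHARED as the error
-- message suggests (reads stay possible, writes are blocked while the index builds).
CREATE INDEX idx_posts_create_at_id ON Posts (CreateAt, Id) LOCK=SHARED;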

percona
  • 1 answer
  • 16 Views
Mike
Asked: 2022-05-14 22:31:56 +0800 CST

Percona MySQL 5.7 to 8.0

  • 0

I'm wondering whether anyone has successfully upgraded Percona 5.7 to 8.0 with a fully populated database. Every article I've read suggests it's possible, but I've tried twice in a test environment, and when the upgrade runs it stalls during service startup and never gives any real indication of what's happening.

This is a large multi-database server: over 100 databases and more than 150 GB of data.

I want to get this process right because, once it goes live, the server sits in a replication cluster. I've read that you should start with a slave, which is fine, but even that implies I should be able to upgrade with the data in place.

When I check whether the tables can be upgraded, they all return OK.
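
For context, that pre-upgrade table check is typically run with mysqlcheck; a minimal sketch, with connection details as placeholders:

# Check every table for upgrade incompatibilities before moving from 5.7 to 8.0
# (credentials/host are placeholders).
mysqlcheck --all-databases --check-upgrade -u root -p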

mysql-5.7 percona
  • 1 answer
  • 90 Views
ParoX
Asked: 2020-12-11 09:53:17 +0800 CST

Can I disable the binary logs to save space temporarily?

  • 2

I'm using Percona MySQL 8.

I don't use any kind of replication, but I've read that binary logs are useful for data recovery. I'd like to turn binlogging off and flush the logs while I run pt-online-schema-change to do a no-impact OPTIMIZE TABLE.

Once that's done, I'd like to turn binlogging back on (and then work on moving to a server with more space).

Is this safe and recommended? I need to optimize a table and can't take it offline, and unless I delete 50 GB of binary logs, making a copy of the table will run me out of space.
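
For reference, a minimal sketch of the two standard MySQL 8.0 mechanisms involved here (the cutoff date is a placeholder): old logs can be purged without a restart, while switching binary logging off entirely requires a restart.

-- Reclaim space by purging binary logs older than a given point (placeholder date):
SHOW BINARY LOGS;
PURGE BINARY LOGS BEFORE '2020-12-01 00:00:00';

-- Disabling binary logging completely requires a server restart with this in my.cnf:
--   [mysqld]
--   skip-log-bin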

mysql percona
  • 3 answers
  • 1489 Views
GuruBob
Asked: 2019-06-02 17:31:07 +0800 CST

MySQL replication slave using Percona and Docker

  • 0

I'm trying to run a MySQL replication slave in a Docker container. In production we run MySQL 5.7.24-27-log from the Percona repository (Ubuntu 18.04).

I used xtrabackup to back up, prepare, and ship a starting dataset for replication, and then I started the Percona Docker image (docker pull percona) like this:

$ docker run --name mysql-replication -v /replication/data:/var/lib/mysql -v /replication/docker.cnf:/etc/mysql/docker.cnf:ro -e MYSQL_ROOT_PASSWORD=xxxx -P -d percona

My docker.cnf only adds the server ID (I copied the rest from the percona image):

[mysqld]
skip-host-cache
skip-name-resolve
bind-address    = 0.0.0.0
server-id       = 4

Then, with CHANGE MASTER etc., my replication ran just fine.
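
For reference, a minimal sketch of that kind of CHANGE MASTER statement (host, credentials, and binlog coordinates are placeholders; the coordinates would come from the xtrabackup_binlog_info file of the backup):

CHANGE MASTER TO
    MASTER_HOST='master.example.com',    -- placeholder
    MASTER_USER='repl',                  -- placeholder
    MASTER_PASSWORD='********',
    MASTER_LOG_FILE='mysql-bin.000001',  -- placeholder, from xtrabackup_binlog_info
    MASTER_LOG_POS=4;
START SLAVE;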

My intention (hence the volume mount -v /replication/data:/var/lib/mysql) is to keep all the MySQL data on the host and treat the replication Docker container as ephemeral, i.e. no state inside the container. It should also be easy to spin up another replication container if I need one, by stopping the existing container, copying the data elsewhere, changing the server-id, and running a new container.

To test this, once it was set up and running fine (I watched Seconds_Behind_Master drop to 0), I figured I should be able to remove the container, recreate it, and have replication still work. So I tried this:

$ docker stop mysql-replication
$ docker rm mysql-replication
$ docker run ... // same command as before

When I did that and connected to the MySQL running in the container, I found Slave_IO_Running was No, and after starting it (START SLAVE;) I got the following (as shown by SHOW SLAVE STATUS;):

Last_Error: Could not execute Update_rows event on table databasename.tablename; Can't find record in 'tablename', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000681, end_log_pos 9952

(databasename and tablename are the real database and table names)

At first I thought I might have messed something up, but I've tried this many times now to pin the problem down. docker diff mysql-replication shows nothing changed in the running container that looks significant:

$ docker diff mysql-replication 
C /run
C /run/mysqld
A /run/mysqld/mysqld.pid
C /var
C /var/log
A /var/log/mysql

Googling suggests I need RESET SLAVE; followed by START SLAVE;, but that doesn't seem to fix it - it's as if the data (outside the container) is no longer in sync with the master, so replication can't continue.

Can anyone pick holes in what I'm doing?

Many thanks.

replication percona
  • 1 answer
  • 565 Views
Viet
Asked: 2019-03-07 02:29:27 +0800 CST

Percona XtraDB Cluster 56: node 2 fails to rejoin the cluster

  • 0

I have the following database cluster:

Node 1: Percona-XtraDB-Cluster-56-5.6.39-26.25.1

Node 2: Percona-XtraDB-Cluster-56-5.6.39-26.25.1

Node 3: Percona-XtraDB-Cluster-56-5.6.41-28.28.1

I start node 1, node 2, and node 3 in that order. The cluster comes up and works fine. But now node 1 and then node 2 have crashed. My cluster (node 3 only) keeps working normally.

However, I cannot start the other nodes (such as node 2) and have them sync to the cluster. I have tried clearing the database data directory and syncing from scratch, but it still fails.

Here are the error logs:

Node 2: service mysql start

2019-03-06 16:40:32 1007281 [Note] WSREP: Setting wsrep_ready to false
2019-03-06 16:40:32 1007281 [Note] WSREP: Read nil XID from storage engines, skipping position init
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/libgalera_smm.so'
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_load(): Galera 3.25(rac090bc) by Codership Oy <[email protected]> loaded successfully.
2019-03-06 16:40:32 1007281 [Note] WSREP: CRC-32C: using hardware acceleration.
2019-03-06 16:40:32 1007281 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootstrap: 1
2019-03-06 16:40:32 1007281 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 10.58.49.161; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT
2019-03-06 16:40:32 1007281 [Note] WSREP: GCache history reset: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:0 -> 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:32 1007281 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_sst_grab()
2019-03-06 16:40:32 1007281 [Note] WSREP: Start replication
2019-03-06 16:40:32 1007281 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:32 1007281 [Note] WSREP: protonet asio version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: Using CRC-32C for message checksums.
2019-03-06 16:40:32 1007281 [Note] WSREP: backend: asio
2019-03-06 16:40:32 1007281 [Note] WSREP: gcomm thread scheduling priority set to other:0 
2019-03-06 16:40:32 1007281 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2019-03-06 16:40:32 1007281 [Note] WSREP: restore pc from disk failed
2019-03-06 16:40:32 1007281 [Note] WSREP: GMCast version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2019-03-06 16:40:32 1007281 [Note] WSREP: EVS version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: gcomm: connecting to group 'my_centos_cluster', peer '10.58.49.162:'
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') connection established to 879d48d1 tcp://10.58.49.162:4567
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
2019-03-06 16:40:32 1007281 [Note] WSREP: declaring 879d48d1 at tcp://10.58.49.162:4567 stable
2019-03-06 16:40:32 1007281 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:32 1007281 [Note] WSREP: view(view_id(PRIM,879d48d1,22) memb {
    879d48d1,0
    e27e1564,0
} joined {
} left {
} partitioned {
})
2019-03-06 16:40:32 1007281 [Note] WSREP: save pc into disk
2019-03-06 16:40:33 1007281 [Note] WSREP: gcomm: connected
2019-03-06 16:40:33 1007281 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2019-03-06 16:40:33 1007281 [Note] WSREP: Opened channel 'my_centos_cluster'
2019-03-06 16:40:33 1007281 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
2019-03-06 16:40:33 1007281 [Note] WSREP: Waiting for SST to complete.
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: sent state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 0 (v-connect-03)
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 1 (localhost.localdomain)
2019-03-06 16:40:33 1007281 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 21,
    members    = 1/2 (joined/total),
    act_id     = 143386057,
    last_appl. = -1,
    protocols  = 0/8/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:33 1007281 [Note] WSREP: Flow-control interval: [23, 23]
2019-03-06 16:40:33 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 143386057)
2019-03-06 16:40:33 1007281 [Note] WSREP: State transfer required: 
    Group state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057
    Local state: 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:33 1007281 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057, view# 22: Primary, number of nodes: 2, my index: 1, protocol version 3
2019-03-06 16:40:33 1007281 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:33 1007281 [Warning] WSREP: Gap in state sequence. Need state transfer.
2019-03-06 16:40:33 1007281 [Note] WSREP: Setting wsrep_ready to false
2019-03-06 16:40:33 1007281 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.58.49.161' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1007281'  '' '
WSREP_SST: [INFO] Streaming with xbstream (2019-03-06 16:40:33)
WSREP_SST: [INFO] Using socat as streamer (2019-03-06 16:40:33)
WSREP_SST: [INFO] Stale sst_in_progress file: /var/lib/mysql//sst_in_progress (2019-03-06 16:40:33)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr,retry=30 stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:33)
2019-03-06 16:40:33 1007281 [Note] WSREP: Prepared SST request: xtrabackup-v2|10.58.49.161:4444/xtrabackup_sst//1
2019-03-06 16:40:33 1007281 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 1007281 [Note] WSREP: REPL Protocols: 8 (3, 2)
2019-03-06 16:40:33 1007281 [Note] WSREP: Assign initial position for certification: 143386057, protocol version: 3
2019-03-06 16:40:33 1007281 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:33 1007281 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (773c5ba0-1f0e-11e8-8359-366569ddd6b6): 1 (Operation not permitted)
     at galera/src/replicator_str.cpp:prepare_for_IST():535. IST will be unavailable.
2019-03-06 16:40:33 1007281 [Note] WSREP: Member 1.0 (localhost.localdomain) requested state transfer from '*any*'. Selected 0.0 (v-connect-03)(SYNCED) as donor.
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 143386070)
2019-03-06 16:40:33 1007281 [Note] WSREP: Requesting state transfer: success, donor: 0
2019-03-06 16:40:33 1007281 [Note] WSREP: GCache history reset: 00000000-0000-0000-0000-000000000000:0 -> 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057
WSREP_SST: [ERROR] Cleanup after exit with status:1 (2019-03-06 16:40:33)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.58.49.161' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1007281'  '' : 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2019-03-06 16:40:34 1007281 [ERROR] WSREP: SST script aborted with error 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] Aborting

2019-03-06 16:40:34 1007281 [Note] WSREP: Signalling cancellation of the SST request.
2019-03-06 16:40:34 1007281 [Note] WSREP: SST request was cancelled
2019-03-06 16:40:34 1007281 [Note] WSREP: Closing send monitor...
2019-03-06 16:40:34 1007281 [Note] WSREP: Closed send monitor.
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: terminating thread
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: joining thread
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: closing backend
2019-03-06 16:40:35 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting off
2019-03-06 16:40:36 1007281 [Note] WSREP: Service disconnected.
2019-03-06 16:40:36 1007281 [Note] WSREP: Waiting to close threads......
2019-03-06 16:40:36 1007281 [Note] WSREP: rollbacker thread exiting
2019-03-06 16:40:37 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') connection to peer 879d48d1 with addr tcp://10.58.49.162:4567 timed out, no messages seen in PT3S
2019-03-06 16:40:37 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.58.49.162:4567 
2019-03-06 16:40:38 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') reconnecting to 879d48d1 (tcp://10.58.49.162:4567), attempt 0
2019-03-06 16:40:39 1007281 [Note] WSREP: evs::proto(e27e1564, LEAVING, view_id(REG,879d48d1,22)) suspecting node: 879d48d1
2019-03-06 16:40:39 1007281 [Note] WSREP: evs::proto(e27e1564, LEAVING, view_id(REG,879d48d1,22)) suspected node without join message, declaring inactive
2019-03-06 16:40:39 1007281 [Note] WSREP: view(view_id(NON_PRIM,879d48d1,22) memb {
    e27e1564,0
} joined {
} left {
} partitioned {
    879d48d1,0
})
2019-03-06 16:40:39 1007281 [Note] WSREP: view((empty))
2019-03-06 16:40:39 1007281 [Note] WSREP: gcomm: closed
2019-03-06 16:40:39 1007281 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2019-03-06 16:40:39 1007281 [Note] WSREP: Flow-control interval: [16, 16]
2019-03-06 16:40:39 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:39 1007281 [Note] WSREP: Received NON-PRIMARY.
2019-03-06 16:40:39 1007281 [Note] WSREP: Shifting JOINER -> OPEN (TO: 143386078)
2019-03-06 16:40:39 1007281 [Note] WSREP: Received self-leave message.
2019-03-06 16:40:39 1007281 [Note] WSREP: Flow-control interval: [0, 0]
2019-03-06 16:40:39 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:39 1007281 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2019-03-06 16:40:39 1007281 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 143386078)
2019-03-06 16:40:39 1007281 [Note] WSREP: RECV thread exiting 0: Success
2019-03-06 16:40:39 1007281 [Note] WSREP: recv_thread() joined.
2019-03-06 16:40:39 1007281 [Note] WSREP: Closing replication queue.
2019-03-06 16:40:39 1007281 [Note] WSREP: Closing slave action queue.
2019-03-06 16:40:39 1007281 [ERROR] WSREP: Certification exception: Unsupported key prefix: : 71 (Protocol error)
     at galera/src/key_set.cpp:throw_bad_prefix():152
2019-03-06 16:40:39 1007281 [Note] WSREP: /usr/sbin/mysqld: Terminated.

Node 3: currently running

2019-03-06 16:40:32 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') connection established to e27e1564 tcp://10.58.49.161:4567
2019-03-06 16:40:32 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
2019-03-06 16:40:32 28615 [Note] WSREP: declaring e27e1564 at tcp://10.58.49.161:4567 stable
2019-03-06 16:40:32 28615 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:32 28615 [Note] WSREP: view(view_id(PRIM,879d48d1,22) memb {
    879d48d1,0
    e27e1564,0
} joined {
} left {
} partitioned {
})
2019-03-06 16:40:32 28615 [Note] WSREP: save pc into disk
2019-03-06 16:40:32 28615 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
2019-03-06 16:40:32 28615 [Note] WSREP: STATE_EXCHANGE: sent state UUID: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:32 28615 [Note] WSREP: STATE EXCHANGE: sent state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:32 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 0 (v-connect-03)
2019-03-06 16:40:33 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 1 (localhost.localdomain)
2019-03-06 16:40:33 28615 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 21,
    members    = 1/2 (joined/total),
    act_id     = 143386057,
    last_appl. = 143386003,
    protocols  = 0/8/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:33 28615 [Note] WSREP: Flow-control interval: [23, 23]
2019-03-06 16:40:33 28615 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:33 28615 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057, view# 22: Primary, number of nodes: 2, my index: 0, protocol version 3
2019-03-06 16:40:33 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:33 28615 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 1 -> 2)
2019-03-06 16:40:33 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 28615 [Note] WSREP: REPL Protocols: 8 (3, 2)
2019-03-06 16:40:33 28615 [Note] WSREP: Assign initial position for certification: 143386057, protocol version: 3
2019-03-06 16:40:33 28615 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Note] WSREP: Member 1.0 (localhost.localdomain) requested state transfer from '*any*'. Selected 0.0 (v-connect-03)(SYNCED) as donor.
2019-03-06 16:40:33 28615 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 143386070)
2019-03-06 16:40:33 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 28615 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' '
2019-03-06 16:40:33 28615 [Note] WSREP: sst_donor_thread signaled with 0
WSREP_SST: [INFO] Streaming with xbstream (2019-03-06 16:40:33)
WSREP_SST: [INFO] Using socat as streamer (2019-03-06 16:40:33)
WSREP_SST: [INFO] Streaming SST meta-info file before SST (2019-03-06 16:40:33)
WSREP_SST: [INFO] Evaluating xbstream -c ${FILE_TO_STREAM} | socat -u stdio TCP:10.58.49.161:4444,retry=30; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:33)
WSREP_SST: [INFO] Sleeping before data transfer for SST (2019-03-06 16:40:33)
2019-03-06 16:40:35 28615 [Note] WSREP: forgetting e27e1564 (tcp://10.58.49.161:4567)
2019-03-06 16:40:35 28615 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:35 28615 [Note] WSREP: view(view_id(PRIM,879d48d1,23) memb {
    879d48d1,0
} joined {
} left {
} partitioned {
    e27e1564,0
})
2019-03-06 16:40:35 28615 [Note] WSREP: save pc into disk
2019-03-06 16:40:35 28615 [Note] WSREP: forgetting e27e1564 (tcp://10.58.49.161:4567)
2019-03-06 16:40:35 28615 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
2019-03-06 16:40:35 28615 [Note] WSREP: STATE_EXCHANGE: sent state UUID: e43901d6-3ff3-11e9-bbc9-83ea4282ab29
2019-03-06 16:40:35 28615 [Note] WSREP: STATE EXCHANGE: sent state msg: e43901d6-3ff3-11e9-bbc9-83ea4282ab29
2019-03-06 16:40:35 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e43901d6-3ff3-11e9-bbc9-83ea4282ab29 from 0 (v-connect-03)
2019-03-06 16:40:35 28615 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 22,
    members    = 1/1 (joined/total),
    act_id     = 143386078,
    last_appl. = 143386003,
    protocols  = 0/9/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:35 28615 [Note] WSREP: Flow-control interval: [16, 16]
2019-03-06 16:40:35 28615 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:35 28615 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386078, view# 23: Primary, number of nodes: 1, my index: 0, protocol version 3
2019-03-06 16:40:35 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:35 28615 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 2 -> 1)
2019-03-06 16:40:35 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:35 28615 [Note] WSREP: REPL Protocols: 9 (4, 2)
2019-03-06 16:40:35 28615 [Note] WSREP: Assign initial position for certification: 143386078, protocol version: 4
2019-03-06 16:40:35 28615 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:35 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') turning message relay requesting off
2019-03-06 16:40:38 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') connection established to e27e1564 tcp://10.58.49.161:4567
2019-03-06 16:40:38 28615 [Warning] WSREP: discarding established (time wait) e27e1564 (tcp://10.58.49.161:4567) 
2019-03-06 16:40:40 28615 [Note] WSREP:  cleaning up e27e1564 (tcp://10.58.49.161:4567)
WSREP_SST: [INFO] Streaming the backup to joiner at 10.58.49.161 4444 (2019-03-06 16:40:43)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/etc/my.cnf  --defaults-group=mysqld --no-version-check  $INNOEXTRA --galera-info --stream=$sfmt $itmpdir 2>${DATA}/innobackup.backup.log | socat -u stdio TCP:10.58.49.161:4444,retry=30; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:43)
2019/03/06 16:41:13 socat[24873] E connect(3, AF=2 10.58.49.161:4444, 16): Connection refused
WSREP_SST: [ERROR] innobackupex finished with error: 1.  Check /var/lib/mysql//innobackup.backup.log (2019-03-06 16:41:14)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (2019-03-06 16:41:14)
WSREP_SST: [INFO] Cleaning up temporary directories (2019-03-06 16:41:14)
2019-03-06 16:41:14 28615 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' 
2019-03-06 16:41:14 28615 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' : 22 (Invalid argument)
2019-03-06 16:41:14 28615 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' 
2019-03-06 16:41:14 28615 [Warning] WSREP: Could not find peer: e27e1564-3ff3-11e9-8f94-aa1e9dd03b7f
2019-03-06 16:41:14 28615 [Warning] WSREP: 0.0 (v-connect-03): State transfer to -1.-1 (left the group) failed: -22 (Invalid argument)
2019-03-06 16:41:14 28615 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 143386421)
2019-03-06 16:41:14 28615 [Note] WSREP: Member 0.0 (v-connect-03) synced with group.
2019-03-06 16:41:14 28615 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 143386421)
2019-03-06 16:41:14 28615 [Note] WSREP: Synchronized with group, ready for connections
2019-03-06 16:41:14 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:41:14 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

Log: /var/lib/mysql//innobackup.backup.log

190306 16:40:43 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

190306 16:40:43 Connecting to MySQL server host: localhost, user: sstuser, password: set, port: not set, socket: /var/lib/mysql/mysql.sock
Using server version 5.6.41-84.1-56
innobackupex version 2.3.10 based on MySQL server 5.6.24 Linux (x86_64) (revision id: bd0d4403f36)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 65535, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 536870912
xtrabackup: using O_DIRECT
innobackupex: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
xtrabackup: Error: write to logfile failed
innobackupex: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xtrabackup: Error: xtrabackup_copy_logfile() failed.

How do I get node 2 to rejoin the cluster?
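
Given the "Connection refused" from socat to 10.58.49.161:4444 in the donor log above, one hedged first check is whether the joiner's SST receiver is actually reachable while it waits for the transfer (standard Linux tools; IP and port are the ones from the logs):

# On the joiner (10.58.49.161), while it is waiting for SST, confirm something is listening:
ss -ltnp | grep 4444

# From the donor, confirm the SST port is reachable (a firewall often blocks it):
nc -vz 10.58.49.161 4444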

mysql percona
  • 2 answers
  • 938 Views
The Georgia
Asked: 2018-12-24 18:57:15 +0800 CST

Percona PAM with SSSD for AD authentication

  • 0

I have installed the Percona PAM plugin on my Percona Server, as shown below:

mysql> show plugins;
...
| auth_pam                      | ACTIVE   | AUTHENTICATION     | auth_pam.so        | GPL     |
| auth_pam_compat               | ACTIVE   | AUTHENTICATION     | auth_pam_compat.so | GPL     |
+-------------------------------+----------+--------------------+--------------------+---------+

and have also configured this:

cat /etc/pam.d/mysqld 
auth required pam_sss.so
account required pam_sss.so

I have a group called "dba" on the AD server and have added an AD user "john.d" to that group. I want to log in to MySQL with an AD user such as john.d, who should also inherit all the privileges granted to the "dba" group. Below is how this AD group "dba" is set up to allow its users access to the Percona server:

CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld,dba=dbarole';
CREATE USER 'dbarole'@'%' IDENTIFIED BY 'dbapass';
GRANT ALL PRIVILEGES ON *.* TO 'dbarole'@'%';
GRANT PROXY ON 'dbarole'@'%' TO ''@'';

When I log in to MySQL as dbarole, everything works with all the granted privileges. However, when I log in as john.d (one of the AD users in the "dba" AD group), that user does not inherit the privileges (ALL) granted to its group and only has the USAGE privilege, as shown below:

mysql> show grants;
+-----------------------------------+
| Grants for @                      |
+-----------------------------------+
| GRANT USAGE ON *.* TO ''@''       |
| GRANT PROXY ON 'dba'@'%' TO ''@'' |
+-----------------------------------+
2 rows in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
+--------------------+
1 row in set (0.01 sec)

My question is: how do I get AD users to inherit the privileges granted to their group in MySQL?
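
One minimal way to see what the proxy mapping actually resolved to is to inspect the session after logging in as the AD user; these are standard MySQL functions and variables, nothing specific to this setup:

-- Run after logging in as john.d via PAM. With a working proxy mapping,
-- CURRENT_USER() should report the proxied account ('dbarole'@'%') and
-- @@proxy_user should show the anonymous proxy account; otherwise proxying
-- is not taking effect.
SELECT USER(), CURRENT_USER(), @@proxy_user;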

mysql percona
  • 1 answer
  • 697 Views
The Georgia
Asked: 2018-10-12 02:04:11 +0800 CST

MySQL PXC node cannot receive state

  • 1

I have three nodes that I want to set up as a Percona XtraDB Cluster (PXC). I have brought up the first node and joined the second, but somehow cannot get the third one to join. The configuration is the same on all of them, as I just copy-pasted it:

[mysqld]
# Galera
wsrep_cluster_address = gcomm://10.1.5.100,10.1.5.101,10.1.5.102
wsrep_cluster_name = db-test
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_provider_options = "gcache.size=256M"
wsrep_slave_threads = 16 # 2~3 times with CPU
wsrep_sst_auth = "sstuser:sstPwd#123"
wsrep_sst_method = xtrabackup-v2

I'm running the nodes on CentOS 7.x. Below is the status from the two PXC nodes that are already up and running:

| wsrep_ist_receive_seqno_end      | 0                                       |
| wsrep_incoming_addresses         | 10.1.5.100:3306,10.1.5.101:3306 |
| wsrep_cluster_weight             | 2                                       |
| wsrep_desync_count               | 0                                       |
| wsrep_evs_delayed                |                                         |
| wsrep_evs_evict_list             |                                         |
| wsrep_evs_repl_latency           | 0/0/0/0/0                               |
| wsrep_evs_state                  | OPERATIONAL                             |
| wsrep_gcomm_uuid                 | 8d59ca0f-cd35-11e8-863c-d79869fa6d80    |
| wsrep_cluster_conf_id            | 4                                       |
| wsrep_cluster_size               | 2                                       |
| wsrep_cluster_state_uuid         | ac97f711-cad5-11e8-8f39-be9d0594cdb9    |
| wsrep_cluster_status             | Primary                                 |
| wsrep_connected                  | ON                                      |
| wsrep_local_bf_aborts            | 0                                       |
| wsrep_local_index                | 0                                       |
| wsrep_provider_name              | Galera                                  |
| wsrep_provider_vendor            | Codership Oy <[email protected]>       |
| wsrep_provider_version           | 3.31(rf216443)                          |
| wsrep_ready                      | ON                                      |
+----------------------------------+-----------------------------------------+
71 rows in set (0.01 sec)

Here are the errors from the error log of the third node, which cannot join:

backup-v2|10.1.5.102:4444/xtrabackup_sst//1
2018-10-11T09:20:03.278884-00:00 2 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 2) (Increment: 1 -> 3)
2018-10-11T09:20:03.278997-00:00 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:03.279155-00:00 2 [Note] WSREP: Assign initial position for certification: 69, protocol version: 4
2018-10-11T09:20:03.279626-00:00 0 [Note] WSREP: Service thread queue flushed.
2018-10-11T09:20:03.280052-00:00 2 [Note] WSREP: Check if state gap can be serviced using IST
2018-10-11T09:20:03.280145-00:00 2 [Note] WSREP: Local state seqno is undefined (-1)
2018-10-11T09:20:03.280445-00:00 2 [Note] WSREP: State gap can't be serviced using IST. Switching to SST
2018-10-11T09:20:03.280510-00:00 2 [Note] WSREP: Failed to prepare for incremental state transfer: Local state seqno is undefined: 1 (Operation not permitted)
         at galera/src/replicator_str.cpp:prepare_for_IST():549. IST will be unavailable.
2018-10-11T09:20:03.287673-00:00 0 [Note] WSREP: Member 1.0 (db-test-3.pd.local) requested state transfer from '*any*'. Selected 0.0 (db-test-2.pd.local)(SYNCED) as donor.
2018-10-11T09:20:03.287850-00:00 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 69)
2018-10-11T09:20:03.288073-00:00 2 [Note] WSREP: Requesting state transfer: success, donor: 0
2018-10-11T09:20:03.288225-00:00 2 [Note] WSREP: GCache history reset: ac97f711-cad5-11e8-8f39-be9d0594cdb9:0 -> ac97f711-cad5-11e8-8f39-be9d0594cdb9:69
2018-10-11T09:20:38.988120-00:00 0 [Warning] WSREP: 0.0 (db-test-2.pd.local): State transfer to 1.0 (db-test-3.pd.local) failed: -32 (Broken pipe)
2018-10-11T09:20:38.988274-00:00 0 [ERROR] WSREP: gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():766: Will never receive state. Need to abort.
2018-10-11T09:20:38.988366-00:00 0 [Note] WSREP: gcomm: terminating thread
2018-10-11T09:20:38.988493-00:00 0 [Note] WSREP: gcomm: joining thread
2018-10-11T09:20:38.988942-00:00 0 [Note] WSREP: gcomm: closing backend
2018-10-11T09:20:38.995070-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(NON_PRIM,8d59ca0f,3)
memb {
        d3167260,0
        }
joined {
        }
left {
        }
partitioned {
        8d59ca0f,0
        e3def063,0
        }
)
2018-10-11T09:20:38.995334-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view ((empty))
2018-10-11T09:20:38.996612-00:00 0 [Note] WSREP: gcomm: closed
2018-10-11T09:20:38.996837-00:00 0 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Terminated
        2018-10-11T09:20:47.767946+00:00 WSREP_SST: [ERROR] Removing /var/lib/mysql//xtrabackup_galera_info file due to signal
        2018-10-11T09:20:47.788109+00:00 WSREP_SST: [ERROR] Removing  file due to signal
        2018-10-11T09:20:47.808425+00:00 WSREP_SST: [ERROR] ******************* FATAL ERROR ********************** 
        2018-10-11T09:20:47.818240+00:00 WSREP_SST: [ERROR] Error while getting data from donor node:  exit codes: 143 143
        2018-10-11T09:20:47.828411+00:00 WSREP_SST: [ERROR] ****************************************************** 
        2018-10-11T09:20:47.840006+00:00 WSREP_SST: [ERROR] Cleanup after exit with status:32

Below are the errors from the node that was selected as donor:

2018/10/11 09:20:38 socat[22418] E connect(5, AF=2 10.1.5.102:4444, 16): No route to host
        2018-10-11T09:20:38.805798+00:00 WSREP_SST: [ERROR] ******************* FATAL ERROR ********************** 
        2018-10-11T09:20:38.818683+00:00 WSREP_SST: [ERROR] Error while sending data to joiner node:  exit codes: 0 1
        2018-10-11T09:20:38.832059+00:00 WSREP_SST: [ERROR] ****************************************************** 
        2018-10-11T09:20:38.846813+00:00 WSREP_SST: [ERROR] Cleanup after exit with status:32
2018-10-11T09:20:38.985060-00:00 0 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.1.5.102:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.7.23-23-57'  --binlog 'db-test-2-bin' --gtid 'ac97f711-cad5-11e8-8f39-be9d0594cdb9:69' : 32 (Broken pipe)
2018-10-11T09:20:38.985552-00:00 0 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.1.5.102:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.7.23-23-57'  --binlog 'db-test-2-bin' --gtid 'ac97f711-cad5-11e8-8f39-be9d0594cdb9:69' 
2018-10-11T09:20:38.990613-00:00 0 [Warning] WSREP: 0.0 (db-test-2.pd.local): State transfer to 1.0 (db-test-3.pd.local) failed: -32 (Broken pipe)
2018-10-11T09:20:38.990815-00:00 0 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 69)
2018-10-11T09:20:38.997784-00:00 0 [Note] WSREP: declaring e3def063 at tcp://10.1.5.100:4567 stable
2018-10-11T09:20:38.997807-00:00 0 [Note] WSREP: Member 0.0 (db-test-2.pd.local) synced with group.
2018-10-11T09:20:38.998230-00:00 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 69)
2018-10-11T09:20:38.998277-00:00 0 [Note] WSREP: forgetting d3167260 (tcp://10.1.5.102:4567)
2018-10-11T09:20:38.998806-00:00 13 [Note] WSREP: Synchronized with group, ready for connections
2018-10-11T09:20:38.999112-00:00 13 [Note] WSREP: Setting wsrep_ready to true
2018-10-11T09:20:38.999198-00:00 13 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:39.003491-00:00 0 [Note] WSREP: Node 8d59ca0f state primary
2018-10-11T09:20:39.005025-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(PRIM,8d59ca0f,4)
memb {
        8d59ca0f,0
        e3def063,0
        }
joined {
        }
left {
        }
partitioned {
        d3167260,0
        }
)
2018-10-11T09:20:39.005270-00:00 0 [Note] WSREP: Save the discovered primary-component to disk
2018-10-11T09:20:39.009691-00:00 0 [Note] WSREP: forgetting d3167260 (tcp://10.1.5.102:4567)
2018-10-11T09:20:39.010097-00:00 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
2018-10-11T09:20:39.011037-00:00 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9
2018-10-11T09:20:39.019171-00:00 0 [Note] WSREP: STATE EXCHANGE: sent state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9
2018-10-11T09:20:39.021665-00:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9 from 0 (db-test-2.pd.local)
2018-10-11T09:20:39.021786-00:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9 from 1 (db-test-1.pd.local)
2018-10-11T09:20:39.021861-00:00 0 [Note] WSREP: Quorum results:
        version    = 4,
        component  = PRIMARY,
        conf_id    = 3,
        members    = 2/2 (primary/total),
        act_id     = 69,
        last_appl. = 0,
        protocols  = 0/9/3 (gcs/repl/appl),
        group UUID = ac97f711-cad5-11e8-8f39-be9d0594cdb9
2018-10-11T09:20:39.021999-00:00 0 [Note] WSREP: Flow-control interval: [141, 141]
2018-10-11T09:20:39.022058-00:00 0 [Note] WSREP: Trying to continue unpaused monitor
2018-10-11T09:20:39.022774-00:00 17 [Note] WSREP: REPL Protocols: 9 (4, 2)
2018-10-11T09:20:39.023163-00:00 17 [Note] WSREP: New cluster view: global state: ac97f711-cad5-11e8-8f39-be9d0594cdb9:69, view# 4: Primary, number of nodes: 2, my index: 0, protocol version 3
2018-10-11T09:20:39.023209-00:00 17 [Note] WSREP: Setting wsrep_ready to true
2018-10-11T09:20:39.023256-00:00 17 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 3 -> 2)
2018-10-11T09:20:39.023373-00:00 17 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:39.023540-00:00 17 [Note] WSREP: Assign initial position for certification: 69, protocol version: 4
2018-10-11T09:20:39.023832-00:00 0 [Note] WSREP: Service thread queue flushed.
2018-10-11T09:20:44.480289-00:00 0 [Note] WSREP:  cleaning up d3167260 (tcp://10.1.5.102:4567)

When I bootstrap the third node as its own cluster, it runs just fine. But when I try to stop the first two nodes in the other cluster and have them join the new cluster, they fail to join. I can ping and telnet the first two cluster nodes from the third node and vice versa. I even tried stopping all the nodes and bootstrapping the cluster from scratch, but that didn't help.

What exactly is going on here?
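
Given the "No route to host" on 10.1.5.102:4444 in the donor log, one hedged thing to verify is that all the ports PXC relies on (3306, 4444 for SST, 4567 for group communication, 4568 for IST) are open between the nodes; a sketch for CentOS 7 with firewalld, using the default ports rather than anything from the post:

# On each node, open the standard PXC ports and reload firewalld:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=4444/tcp
firewall-cmd --permanent --add-port=4567/tcp
firewall-cmd --permanent --add-port=4568/tcp
firewall-cmd --reload

# Then check the SST port from the donor:
nc -vz 10.1.5.102 4444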

mysql percona
  • 1 answer
  • 910 Views
Dino Daniel
Asked: 2018-07-13 08:00:28 +0800 CST

Point-in-time recovery on MySQL

  • 0

Can someone suggest the best way to perform a MySQL PITR from binary logs saved in MIXED format?

I'm finding it hard to identify the erroneous query that was executed, so that I can skip it during the recovery process with the mysqlbinlog tool.

I'm following: https://www.percona.com/doc/percona-xtrabackup/LATEST/innobackupex/pit_recovery_ibk.html
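
For context, the mysqlbinlog flow being described usually looks roughly like the sketch below (log file name and stop position are placeholders): decode the log to locate the offending statement, then replay events only up to just before it.

# Decode the binary log (including row events) to find the bad statement:
mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-bin.000042 | less

# Replay everything up to just before the offending event (placeholder position):
mysqlbinlog --stop-position=123456 mysql-bin.000042 | mysql -u root -p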

mysql percona
  • 1 answer
  • 89 Views
Jdeboer
Asked: 2018-05-10 00:37:56 +0800 CST

ProxySQL user not connecting - XtraDB

  • 1

I'm trying to set up an XtraDB cluster of 3 nodes. I now have the cluster up and running, and, following the instructions, I'm trying to set up ProxySQL for load balancing.

So I installed ProxySQL on all 3 nodes. Now I'm trying to configure it with the admin tool, but every time I run the command:

[root@node1 log]# proxysql-admin --config-file=/etc/proxysql-admin.cnf --enable

it comes back with:

This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)

ProxySQL read/write configuration mode is singlewrite
ERROR 1045 (28000): ProxySQL Error: Access denied for user 'proxysql_admin'@'' (using password: YES)
Please check the ProxySQL connection parameters! Terminating.

Now, I'd say this error is pretty straightforward. However, I have the user proxysql_admin correctly defined both in the MySQL cluster and in the .cnf file.

This is my proxysql-admin.cnf file:

# proxysql admin interface credentials.
export PROXYSQL_DATADIR='/var/lib/proxysql'
export PROXYSQL_USERNAME='proxysql_admin'
export PROXYSQL_PASSWORD='placeholder_pass'
export PROXYSQL_HOSTNAME='localhost'
export PROXYSQL_PORT='6032'

# PXC admin credentials for connecting to pxc-cluster-node.
export CLUSTER_USERNAME='proxysql_admin'
export CLUSTER_PASSWORD='placeholder_pass'
#export CLUSTER_HOSTNAME='localhost'
export CLUSTER_HOSTNAME='ccloud'
export CLUSTER_PORT='3306'

# proxysql monitoring user. proxysql admin script will create this user in pxc to monitor pxc-nodes.
export MONITOR_USERNAME='monitor'
export MONITOR_PASSWORD='placeholder_pass'

# Application user to connect to pxc-node through proxysql
export CLUSTER_APP_USERNAME='proxysql_user'
export CLUSTER_APP_PASSWORD='placeholder_pass'

# ProxySQL read/write hostgroup 
export WRITE_HOSTGROUP_ID='10'
export READ_HOSTGROUP_ID='11'

# ProxySQL read/write configuration mode.
export MODE="singlewrite"

# ProxySQL Cluster Node Priority File
export HOST_PRIORITY_FILE=$PROXYSQL_DATADIR/host_priority.conf

These are the users on the MySQL cluster:

mysql> select User,Host from mysql.user;
+----------------+-----------+
| User           | Host      |
+----------------+-----------+
| proxysql_admin |           |
| proxysql_admin | %         |
| mysql.session  | localhost |
| mysql.sys      | localhost |
| proxysql_admin | localhost |
| proxysql_user  | localhost |
| root           | localhost |
| sstuser        | localhost |
+----------------+-----------+

I've tried changing the interface a couple of times, but that doesn't seem to be the problem. Does anyone know why my user isn't connecting?
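
One hedged way to narrow this down is to test both sets of credentials from the config by hand: the PROXYSQL_* values against the ProxySQL admin interface on 6032, and the CLUSTER_* values against MySQL itself (hostnames below come from the config above; 127.0.0.1 for the admin interface is an assumption):

# ProxySQL admin interface (PROXYSQL_USERNAME/PROXYSQL_PASSWORD, port 6032):
mysql -u proxysql_admin -p -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin> '

# Cluster credentials the script uses (CLUSTER_USERNAME/CLUSTER_PASSWORD, port 3306):
mysql -u proxysql_admin -p -h ccloud -P 3306 -e 'SELECT 1;'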

Just in case, here is the cluster my.cnf from the node I'm trying this on:

#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/

[mysqld]
server-id=1
datadir=/mysql-data
socket=/mysql-data/mysql.sock
pid-file=/var/run/mysqld/mysqld.pid

wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

wsrep_cluster_name=ccloud
wsrep_cluster_address=gcomm://ip1,ip2,ip3

wsrep_node_name=pxc1
wsrep_node_address=ip1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:placeholder_pass

pxc_strict_mode=ENFORCING

binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
percona xtradb-cluster
  • 2 answers
  • 2207 Views
Mohd Abdul Mujib
Asked: 2018-04-21 10:36:33 +0800 CST

MySQL XtraDB/Galera cluster with high availability on a RamDisk *SuperFast*

  • 1

OK, so this is going to be a speculative, mostly design-oriented, and fairly long question. I'd grab a coffee if I were you.

Preface: I've been looking into databases, wanting a really (like, really) fast database (engine) with the following must-haves:

  1. ACID compliance
  2. In-memory-ish, for super fast IO
  3. Persistence (well... duh)
  4. Scalable as a cluster / master-slave / etc.
  5. High availability (HA)
  6. Drop-in replacement for MySQL
  7. Open source
  8. Should run on commodity servers (IYKWIM)

So looking at my overly optimistic list of requirements, you've probably already jumped to... hmm

how to speed up mysql, slow queries

Alright, jokes aside, I know that with innodb_buffer_pool_size tuned properly it will work mostly out of memory most of the time, but I say:

it's not in-memory, yo!

But you'll say hey, it's 2k18, surely people have already built some 100% in-memory databases, right? Well... actually they have, but each one comes with its own trade-offs.

  1. VoltDB Community Edition: everything seems fine until you realize it isn't a drop-in replacement. It needs some stored-procedure-style commands in Java, which would require rewriting the whole application, or at least the DB layer/driver/etc. of the PHP application. So? Deal breaker!

  2. MemSQL: hmm, this seems like a strong contender in our "best open source in-memory scalable SQL ACID DB ever" contest. Except the MemSQL folks are like...

MemSQL server requirements

Needless to say, MemSQL needs at least 4 cores and 8 gigs of RAM, and the recommended 4 cores and 32 gigs per core is absolutely insane!!!! Also, MemSQL's community edition (which, by the way, is not fully open source! It's merely free) doesn't support high availability, since that's a paid feature. Plus it's NoSQL. So? Deal breaker!

  3. All the other NoSQL-ish databases, like Membase, Redis, Memcached, etc., are ruled out.

So now, my genius idea!

I'm wondering whether we could run an XtraDB/Galera cluster with all the instances running from a RAMDisc, plus regular snapshots?

It ticks all the boxes.

Hear me out, and let me address the elephant in the room first: we know that running full MySQL DBs from RAMDiscs is rather, um... bold, to put it most politely. So what happens if a server crashes/shuts down/etc.? We lose one node, while our DB cluster as a whole is still alive and kicking. All we'd have to do is bring that database back up from the last snapshot and let it sync back with the cluster, which, by the way, is something the cluster itself is really good at!
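
For concreteness, the "running from a RAMDisc" part would usually just mean putting the datadir on tmpfs; a minimal sketch, with size and paths as placeholders, and with the obvious caveat that everything on it disappears on reboot (which is exactly the trade-off being discussed):

# Mount a tmpfs for the MySQL datadir (placeholder size; contents vanish on reboot):
mount -t tmpfs -o size=32G tmpfs /var/lib/mysql
chown mysql:mysql /var/lib/mysql

# Optional /etc/fstab entry so the mount is recreated at boot; the node would then
# rejoin the cluster via SST or be restored from the latest snapshot:
# tmpfs  /var/lib/mysql  tmpfs  size=32G  0  0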

Alright folks, don't be too harsh on me, and if you find flaws in my idea, please point me in the right direction.

mysql percona
  • 2 answers
  • 610 Views
