AskOverflow.Dev

Questions tagged [percona] (dba)

laimison
Asked: 2023-01-12 17:31:01 +0800 CST

command: apply_migration, originalError: Error 1845: LOCK=NONE is not supported for this operation. Try LOCK=SHARED

  • 5

I got this error during the application's schema migration:

{"timestamp":"2023-01-11 11:53:09.043 Z","level":"fatal","msg":"Failed to apply database migrations.","caller":"sqlstore/store.go:169","error":"driver: mysql, message: failed when applying migration, command: apply_migration, originalError: Error 1845: LOCK=NONE is not supported for this operation. Try LOCK=SHARED., query: \n\nSET @preparedStatement = (SELECT IF(\n    (\n        SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS\n        WHERE table_name = 'Posts'\n        AND table_schema = DATABASE()\n        AND index_name = 'idx_posts_create_at_id'\n    ) > 0,\n    'SELECT 1;',\n    'CREATE INDEX idx_posts_create_at_id on Posts(CreateAt, Id) LOCK=NONE;'\n));\n\nPREPARE createIndexIfNotExists FROM @preparedStatement;\nEXECUTE createIndexIfNotExists;\nDEALLOCATE PREPARE createIndexIfNotExists;\n\n"}

Do I have any options to apply the new schema and keep using the application as usual?

Thanks
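
For reference, the failing statement is visible inside the logged query above, and the error text itself names the fallback. A minimal sketch of running the index creation by hand with the lock level the server asks for (table and index names taken from the log, assumed unchanged):

CREATE INDEX idx_posts_create_at_id ON Posts (CreateAt, Id) LOCK=SHARED;
-- LOCK=SHARED allows concurrent reads but blocks writes while the index builds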

percona
  • 1 answer
  • 16 Views
Mike
Asked: 2022-05-14 22:31:56 +0800 CST

Percona MySQL 5.7 to 8.0

  • 0

I would like to know whether anyone has successfully upgraded Percona 5.7 to 8.0 with a fully populated database. Every article I have read suggests it is possible, but I have tried twice in a test environment and, while the upgrade itself runs, it hangs during service startup and never gives a real indication of why.

It is a large multi-database server with over 100 databases totalling more than 150 GB.

I would like to get the process right because, when it goes live, the servers are in a replication cluster. I have read that I should start with the slaves first, which is fine, but even that suggests I should be able to upgrade with the data in place.

When checking whether the tables are fit for upgrade, they all return OK.
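
As an aside, the standard pre-flight for this path is MySQL Shell's upgrade checker, which covers configuration and object incompatibilities that a plain table check does not. A minimal sketch, assuming mysqlsh is installed and with placeholder connection details:

# run against the 5.7 server before attempting the in-place upgrade
mysqlsh -- util check-for-server-upgrade root@localhost:3306 --config-path=/etc/my.cnf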

mysql-5.7 percona
  • 1 answer
  • 90 Views
ParoX
Asked: 2020-12-11 09:53:17 +0800 CST

Can I disable binlogs to save space temporarily?

  • 2

I am using Percona MySQL 8.

I don't use any kind of replication, but I have read that binary logs are useful for data recovery. I would like to turn binlogging off and free the logs while I run pt-online-schema-change to effectively OPTIMIZE a table without impact.

After that, I want to turn binlogging back on (and then make arrangements to move to a server with more space).

Is this safe and recommended? I need to optimize a table, I cannot go offline, and making a copy of the table will run me out of space unless I remove the 50 GB of binlogs.
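
For scale: in MySQL 8 binary logging cannot be switched off at runtime (log_bin is not dynamic; disabling it means restarting with skip-log-bin), but the space itself can be reclaimed online. A minimal sketch, assuming nothing downstream still needs the old logs:

SHOW BINARY LOGS;                 -- see which files hold the 50 GB
PURGE BINARY LOGS BEFORE NOW();   -- drop all closed binlog files immediately
SET GLOBAL binlog_expire_logs_seconds = 86400;  -- keep future logs for one day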

mysql percona
  • 3 answers
  • 1489 Views
GuruBob
Asked: 2019-06-02 17:31:07 +0800 CST

MySQL replication slave using Percona and Docker

  • 0

I am trying to run a MySQL replication slave in a Docker container. We run MySQL 5.7.24-27-log in production, from the Percona repository (Ubuntu 18.04).

I used xtrabackup to take, prepare and ship an initial data set for replication, and then started the Percona Docker image (docker pull percona) like this:

$ docker run --name mysql-replication -v /replication/data:/var/lib/mysql -v /replication/docker.cnf:/etc/mysql/docker.cnf:ro -e MYSQL_ROOT_PASSWORD=xxxx -P -d percona

My docker.cnf simply sets the server-id (I copied it from the percona image).

[mysqld]
skip-host-cache
skip-name-resolve
bind-address    = 0.0.0.0
server-id       = 4

After CHANGE MASTER etc. I got replication working nicely.

My intention (hence the volume mount -v /replication/data:/var/lib/mysql) is to keep all MySQL data on the host machine and treat the replication Docker container as ephemeral, i.e. no state kept in the container. It should also be easy to spin up another replication container should I need one, by stopping the existing container, copying the data elsewhere, changing the server-id and running a new container.

To test this, once it was set up and running correctly (I watched Seconds_Behind_Master drop to 0), I figured I should be able to delete the container and recreate it, and replication would still work fine. So I tried this:

$ docker stop mysql-replication
$ docker rm mysql-replication
$ docker run ... // same command as before

When I do this and connect to the MySQL running in the container, I find that Slave_IO_Running is No, and after starting it (START SLAVE;) I get the following (as seen in SHOW SLAVE STATUS;):

Last_Error: Could not execute Update_rows event on table databasename.tablename; Can't find record in 'tablename', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000681, end_log_pos 9952

(databasename and tablename are real database and table names)

At first I thought I had probably messed something up, but I have tried this several times now to narrow the problem down. Using docker diff mysql-replication shows no changes to the running container that look significant:

$ docker diff mysql-replication 
C /run
C /run/mysqld
A /run/mysqld/mysqld.pid
C /var
C /var/log
A /var/log/mysql

Googling suggested I need to use RESET SLAVE; and START SLAVE;, but that does not seem to fix it: it is as if the data (outside the container) is no longer in sync with the master, so replication cannot continue.

Can anyone pick holes in what I am doing, please?

Many thanks.
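
One thing worth ruling out here, purely as a sketch of the test and not a confirmed diagnosis: if mysqld is killed before InnoDB and the relay state are fully flushed, the host-side datadir can lag the recorded replication position, which would surface exactly as this HA_ERR_KEY_NOT_FOUND drift. Stopping the slave and allowing a generous shutdown window removes that variable:

$ docker exec mysql-replication mysql -uroot -pxxxx -e "STOP SLAVE;"
$ docker stop --time 120 mysql-replication   # default is 10s before SIGKILL
$ docker rm mysql-replication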

replication percona
  • 1 answer
  • 565 Views
Viet
Asked: 2019-03-07 02:29:27 +0800 CST

Node 2 failing to rejoin Percona XtraDB Cluster 56

  • 0

I have the following database cluster:

Node 1: Percona-XtraDB-Cluster-56-5.6.39-26.25.1

Node 2: Percona-XtraDB-Cluster-56-5.6.39-26.25.1

Node 3: Percona-XtraDB-Cluster-56-5.6.41-28.28.1

I start them in the order Node 1, Node 2, Node 3. The cluster starts and works correctly. But now node 1 and node 2 have crashed. My cluster (node 3 only) keeps working correctly.

However, I cannot start another node such as Node 2 and have it sync with the cluster. I tried wiping the database data directory and syncing from scratch, but it still failed.

Here is the error log:

Node 2: mysql service start

2019-03-06 16:40:32 1007281 [Note] WSREP: Setting wsrep_ready to false
2019-03-06 16:40:32 1007281 [Note] WSREP: Read nil XID from storage engines, skipping position init
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/libgalera_smm.so'
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_load(): Galera 3.25(rac090bc) by Codership Oy <[email protected]> loaded successfully.
2019-03-06 16:40:32 1007281 [Note] WSREP: CRC-32C: using hardware acceleration.
2019-03-06 16:40:32 1007281 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootstrap: 1
2019-03-06 16:40:32 1007281 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 10.58.49.161; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT
2019-03-06 16:40:32 1007281 [Note] WSREP: GCache history reset: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:0 -> 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:32 1007281 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2019-03-06 16:40:32 1007281 [Note] WSREP: wsrep_sst_grab()
2019-03-06 16:40:32 1007281 [Note] WSREP: Start replication
2019-03-06 16:40:32 1007281 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:32 1007281 [Note] WSREP: protonet asio version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: Using CRC-32C for message checksums.
2019-03-06 16:40:32 1007281 [Note] WSREP: backend: asio
2019-03-06 16:40:32 1007281 [Note] WSREP: gcomm thread scheduling priority set to other:0 
2019-03-06 16:40:32 1007281 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2019-03-06 16:40:32 1007281 [Note] WSREP: restore pc from disk failed
2019-03-06 16:40:32 1007281 [Note] WSREP: GMCast version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2019-03-06 16:40:32 1007281 [Note] WSREP: EVS version 0
2019-03-06 16:40:32 1007281 [Note] WSREP: gcomm: connecting to group 'my_centos_cluster', peer '10.58.49.162:'
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') connection established to 879d48d1 tcp://10.58.49.162:4567
2019-03-06 16:40:32 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
2019-03-06 16:40:32 1007281 [Note] WSREP: declaring 879d48d1 at tcp://10.58.49.162:4567 stable
2019-03-06 16:40:32 1007281 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:32 1007281 [Note] WSREP: view(view_id(PRIM,879d48d1,22) memb {
    879d48d1,0
    e27e1564,0
} joined {
} left {
} partitioned {
})
2019-03-06 16:40:32 1007281 [Note] WSREP: save pc into disk
2019-03-06 16:40:33 1007281 [Note] WSREP: gcomm: connected
2019-03-06 16:40:33 1007281 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2019-03-06 16:40:33 1007281 [Note] WSREP: Opened channel 'my_centos_cluster'
2019-03-06 16:40:33 1007281 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
2019-03-06 16:40:33 1007281 [Note] WSREP: Waiting for SST to complete.
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: sent state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 0 (v-connect-03)
2019-03-06 16:40:33 1007281 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 1 (localhost.localdomain)
2019-03-06 16:40:33 1007281 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 21,
    members    = 1/2 (joined/total),
    act_id     = 143386057,
    last_appl. = -1,
    protocols  = 0/8/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:33 1007281 [Note] WSREP: Flow-control interval: [23, 23]
2019-03-06 16:40:33 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 143386057)
2019-03-06 16:40:33 1007281 [Note] WSREP: State transfer required: 
    Group state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057
    Local state: 00000000-0000-0000-0000-000000000000:-1
2019-03-06 16:40:33 1007281 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057, view# 22: Primary, number of nodes: 2, my index: 1, protocol version 3
2019-03-06 16:40:33 1007281 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:33 1007281 [Warning] WSREP: Gap in state sequence. Need state transfer.
2019-03-06 16:40:33 1007281 [Note] WSREP: Setting wsrep_ready to false
2019-03-06 16:40:33 1007281 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.58.49.161' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1007281'  '' '
WSREP_SST: [INFO] Streaming with xbstream (2019-03-06 16:40:33)
WSREP_SST: [INFO] Using socat as streamer (2019-03-06 16:40:33)
WSREP_SST: [INFO] Stale sst_in_progress file: /var/lib/mysql//sst_in_progress (2019-03-06 16:40:33)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr,retry=30 stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:33)
2019-03-06 16:40:33 1007281 [Note] WSREP: Prepared SST request: xtrabackup-v2|10.58.49.161:4444/xtrabackup_sst//1
2019-03-06 16:40:33 1007281 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 1007281 [Note] WSREP: REPL Protocols: 8 (3, 2)
2019-03-06 16:40:33 1007281 [Note] WSREP: Assign initial position for certification: 143386057, protocol version: 3
2019-03-06 16:40:33 1007281 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:33 1007281 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (773c5ba0-1f0e-11e8-8359-366569ddd6b6): 1 (Operation not permitted)
     at galera/src/replicator_str.cpp:prepare_for_IST():535. IST will be unavailable.
2019-03-06 16:40:33 1007281 [Note] WSREP: Member 1.0 (localhost.localdomain) requested state transfer from '*any*'. Selected 0.0 (v-connect-03)(SYNCED) as donor.
2019-03-06 16:40:33 1007281 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 143386070)
2019-03-06 16:40:33 1007281 [Note] WSREP: Requesting state transfer: success, donor: 0
2019-03-06 16:40:33 1007281 [Note] WSREP: GCache history reset: 00000000-0000-0000-0000-000000000000:0 -> 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057
WSREP_SST: [ERROR] Cleanup after exit with status:1 (2019-03-06 16:40:33)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.58.49.161' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1007281'  '' : 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2019-03-06 16:40:34 1007281 [ERROR] WSREP: SST script aborted with error 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2019-03-06 16:40:34 1007281 [ERROR] Aborting

2019-03-06 16:40:34 1007281 [Note] WSREP: Signalling cancellation of the SST request.
2019-03-06 16:40:34 1007281 [Note] WSREP: SST request was cancelled
2019-03-06 16:40:34 1007281 [Note] WSREP: Closing send monitor...
2019-03-06 16:40:34 1007281 [Note] WSREP: Closed send monitor.
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: terminating thread
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: joining thread
2019-03-06 16:40:34 1007281 [Note] WSREP: gcomm: closing backend
2019-03-06 16:40:35 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting off
2019-03-06 16:40:36 1007281 [Note] WSREP: Service disconnected.
2019-03-06 16:40:36 1007281 [Note] WSREP: Waiting to close threads......
2019-03-06 16:40:36 1007281 [Note] WSREP: rollbacker thread exiting
2019-03-06 16:40:37 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') connection to peer 879d48d1 with addr tcp://10.58.49.162:4567 timed out, no messages seen in PT3S
2019-03-06 16:40:37 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.58.49.162:4567 
2019-03-06 16:40:38 1007281 [Note] WSREP: (e27e1564, 'tcp://0.0.0.0:4567') reconnecting to 879d48d1 (tcp://10.58.49.162:4567), attempt 0
2019-03-06 16:40:39 1007281 [Note] WSREP: evs::proto(e27e1564, LEAVING, view_id(REG,879d48d1,22)) suspecting node: 879d48d1
2019-03-06 16:40:39 1007281 [Note] WSREP: evs::proto(e27e1564, LEAVING, view_id(REG,879d48d1,22)) suspected node without join message, declaring inactive
2019-03-06 16:40:39 1007281 [Note] WSREP: view(view_id(NON_PRIM,879d48d1,22) memb {
    e27e1564,0
} joined {
} left {
} partitioned {
    879d48d1,0
})
2019-03-06 16:40:39 1007281 [Note] WSREP: view((empty))
2019-03-06 16:40:39 1007281 [Note] WSREP: gcomm: closed
2019-03-06 16:40:39 1007281 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2019-03-06 16:40:39 1007281 [Note] WSREP: Flow-control interval: [16, 16]
2019-03-06 16:40:39 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:39 1007281 [Note] WSREP: Received NON-PRIMARY.
2019-03-06 16:40:39 1007281 [Note] WSREP: Shifting JOINER -> OPEN (TO: 143386078)
2019-03-06 16:40:39 1007281 [Note] WSREP: Received self-leave message.
2019-03-06 16:40:39 1007281 [Note] WSREP: Flow-control interval: [0, 0]
2019-03-06 16:40:39 1007281 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:39 1007281 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2019-03-06 16:40:39 1007281 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 143386078)
2019-03-06 16:40:39 1007281 [Note] WSREP: RECV thread exiting 0: Success
2019-03-06 16:40:39 1007281 [Note] WSREP: recv_thread() joined.
2019-03-06 16:40:39 1007281 [Note] WSREP: Closing replication queue.
2019-03-06 16:40:39 1007281 [Note] WSREP: Closing slave action queue.
2019-03-06 16:40:39 1007281 [ERROR] WSREP: Certification exception: Unsupported key prefix: : 71 (Protocol error)
     at galera/src/key_set.cpp:throw_bad_prefix():152
2019-03-06 16:40:39 1007281 [Note] WSREP: /usr/sbin/mysqld: Terminated.

Node 3: currently running

2019-03-06 16:40:32 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') connection established to e27e1564 tcp://10.58.49.161:4567
2019-03-06 16:40:32 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
2019-03-06 16:40:32 28615 [Note] WSREP: declaring e27e1564 at tcp://10.58.49.161:4567 stable
2019-03-06 16:40:32 28615 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:32 28615 [Note] WSREP: view(view_id(PRIM,879d48d1,22) memb {
    879d48d1,0
    e27e1564,0
} joined {
} left {
} partitioned {
})
2019-03-06 16:40:32 28615 [Note] WSREP: save pc into disk
2019-03-06 16:40:32 28615 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
2019-03-06 16:40:32 28615 [Note] WSREP: STATE_EXCHANGE: sent state UUID: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:32 28615 [Note] WSREP: STATE EXCHANGE: sent state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143
2019-03-06 16:40:32 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 0 (v-connect-03)
2019-03-06 16:40:33 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e2cb78cf-3ff3-11e9-a578-9a611af77143 from 1 (localhost.localdomain)
2019-03-06 16:40:33 28615 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 21,
    members    = 1/2 (joined/total),
    act_id     = 143386057,
    last_appl. = 143386003,
    protocols  = 0/8/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:33 28615 [Note] WSREP: Flow-control interval: [23, 23]
2019-03-06 16:40:33 28615 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:33 28615 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386057, view# 22: Primary, number of nodes: 2, my index: 0, protocol version 3
2019-03-06 16:40:33 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:33 28615 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 1 -> 2)
2019-03-06 16:40:33 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 28615 [Note] WSREP: REPL Protocols: 8 (3, 2)
2019-03-06 16:40:33 28615 [Note] WSREP: Assign initial position for certification: 143386057, protocol version: 3
2019-03-06 16:40:33 28615 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Warning] WSREP: trx protocol version: 4 does not match certification protocol version: 3
2019-03-06 16:40:33 28615 [Note] WSREP: Member 1.0 (localhost.localdomain) requested state transfer from '*any*'. Selected 0.0 (v-connect-03)(SYNCED) as donor.
2019-03-06 16:40:33 28615 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 143386070)
2019-03-06 16:40:33 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:33 28615 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' '
2019-03-06 16:40:33 28615 [Note] WSREP: sst_donor_thread signaled with 0
WSREP_SST: [INFO] Streaming with xbstream (2019-03-06 16:40:33)
WSREP_SST: [INFO] Using socat as streamer (2019-03-06 16:40:33)
WSREP_SST: [INFO] Streaming SST meta-info file before SST (2019-03-06 16:40:33)
WSREP_SST: [INFO] Evaluating xbstream -c ${FILE_TO_STREAM} | socat -u stdio TCP:10.58.49.161:4444,retry=30; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:33)
WSREP_SST: [INFO] Sleeping before data transfer for SST (2019-03-06 16:40:33)
2019-03-06 16:40:35 28615 [Note] WSREP: forgetting e27e1564 (tcp://10.58.49.161:4567)
2019-03-06 16:40:35 28615 [Note] WSREP: Node 879d48d1 state prim
2019-03-06 16:40:35 28615 [Note] WSREP: view(view_id(PRIM,879d48d1,23) memb {
    879d48d1,0
} joined {
} left {
} partitioned {
    e27e1564,0
})
2019-03-06 16:40:35 28615 [Note] WSREP: save pc into disk
2019-03-06 16:40:35 28615 [Note] WSREP: forgetting e27e1564 (tcp://10.58.49.161:4567)
2019-03-06 16:40:35 28615 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
2019-03-06 16:40:35 28615 [Note] WSREP: STATE_EXCHANGE: sent state UUID: e43901d6-3ff3-11e9-bbc9-83ea4282ab29
2019-03-06 16:40:35 28615 [Note] WSREP: STATE EXCHANGE: sent state msg: e43901d6-3ff3-11e9-bbc9-83ea4282ab29
2019-03-06 16:40:35 28615 [Note] WSREP: STATE EXCHANGE: got state msg: e43901d6-3ff3-11e9-bbc9-83ea4282ab29 from 0 (v-connect-03)
2019-03-06 16:40:35 28615 [Note] WSREP: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 22,
    members    = 1/1 (joined/total),
    act_id     = 143386078,
    last_appl. = 143386003,
    protocols  = 0/9/3 (gcs/repl/appl),
    group UUID = 773c5ba0-1f0e-11e8-8359-366569ddd6b6
2019-03-06 16:40:35 28615 [Note] WSREP: Flow-control interval: [16, 16]
2019-03-06 16:40:35 28615 [Note] WSREP: Trying to continue unpaused monitor
2019-03-06 16:40:35 28615 [Note] WSREP: New cluster view: global state: 773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386078, view# 23: Primary, number of nodes: 1, my index: 0, protocol version 3
2019-03-06 16:40:35 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:40:35 28615 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 2 -> 1)
2019-03-06 16:40:35 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2019-03-06 16:40:35 28615 [Note] WSREP: REPL Protocols: 9 (4, 2)
2019-03-06 16:40:35 28615 [Note] WSREP: Assign initial position for certification: 143386078, protocol version: 4
2019-03-06 16:40:35 28615 [Note] WSREP: Service thread queue flushed.
2019-03-06 16:40:35 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') turning message relay requesting off
2019-03-06 16:40:38 28615 [Note] WSREP: (879d48d1, 'tcp://0.0.0.0:4567') connection established to e27e1564 tcp://10.58.49.161:4567
2019-03-06 16:40:38 28615 [Warning] WSREP: discarding established (time wait) e27e1564 (tcp://10.58.49.161:4567) 
2019-03-06 16:40:40 28615 [Note] WSREP:  cleaning up e27e1564 (tcp://10.58.49.161:4567)
WSREP_SST: [INFO] Streaming the backup to joiner at 10.58.49.161 4444 (2019-03-06 16:40:43)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/etc/my.cnf  --defaults-group=mysqld --no-version-check  $INNOEXTRA --galera-info --stream=$sfmt $itmpdir 2>${DATA}/innobackup.backup.log | socat -u stdio TCP:10.58.49.161:4444,retry=30; RC=( ${PIPESTATUS[@]} ) (2019-03-06 16:40:43)
2019/03/06 16:41:13 socat[24873] E connect(3, AF=2 10.58.49.161:4444, 16): Connection refused
WSREP_SST: [ERROR] innobackupex finished with error: 1.  Check /var/lib/mysql//innobackup.backup.log (2019-03-06 16:41:14)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (2019-03-06 16:41:14)
WSREP_SST: [INFO] Cleaning up temporary directories (2019-03-06 16:41:14)
2019-03-06 16:41:14 28615 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' 
2019-03-06 16:41:14 28615 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' : 22 (Invalid argument)
2019-03-06 16:41:14 28615 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.58.49.161:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.6.41-84.1-56'   '' --gtid '773c5ba0-1f0e-11e8-8359-366569ddd6b6:143386070' 
2019-03-06 16:41:14 28615 [Warning] WSREP: Could not find peer: e27e1564-3ff3-11e9-8f94-aa1e9dd03b7f
2019-03-06 16:41:14 28615 [Warning] WSREP: 0.0 (v-connect-03): State transfer to -1.-1 (left the group) failed: -22 (Invalid argument)
2019-03-06 16:41:14 28615 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 143386421)
2019-03-06 16:41:14 28615 [Note] WSREP: Member 0.0 (v-connect-03) synced with group.
2019-03-06 16:41:14 28615 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 143386421)
2019-03-06 16:41:14 28615 [Note] WSREP: Synchronized with group, ready for connections
2019-03-06 16:41:14 28615 [Note] WSREP: Setting wsrep_ready to true
2019-03-06 16:41:14 28615 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

Log: /var/lib/mysql//innobackup.backup.log

190306 16:40:43 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

190306 16:40:43 Connecting to MySQL server host: localhost, user: sstuser, password: set, port: not set, socket: /var/lib/mysql/mysql.sock
Using server version 5.6.41-84.1-56
innobackupex version 2.3.10 based on MySQL server 5.6.24 Linux (x86_64) (revision id: bd0d4403f36)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 65535, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 536870912
xtrabackup: using O_DIRECT
innobackupex: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
xtrabackup: Error: write to logfile failed
innobackupex: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xtrabackup: Error: xtrabackup_copy_logfile() failed.

How can I join Node 2 back into the cluster?
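
Reading the two logs together: the joiner flags a stale /var/lib/mysql/sst_in_progress file and aborts, and only afterwards does the donor's socat get "Connection refused" on 10.58.49.161:4444, because the joiner's listener is already gone. A minimal pre-retry sketch on Node 2 (an assumption to verify, not a confirmed fix):

$ rm -f /var/lib/mysql/sst_in_progress   # stale marker reported by WSREP_SST above
$ ss -ltn | grep 4444                    # confirm nothing else is holding the SST port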

mysql percona
  • 2 answers
  • 938 Views
The Georgia
Asked: 2018-12-24 18:57:15 +0800 CST

Percona PAM with AD authentication using SSSD

  • 0

I installed the Percona PAM plugin on my Percona Server as shown below:

mysql> show plugins;
...
| auth_pam                      | ACTIVE   | AUTHENTICATION     | auth_pam.so        | GPL     |
| auth_pam_compat               | ACTIVE   | AUTHENTICATION     | auth_pam_compat.so | GPL     |
+-------------------------------+----------+--------------------+--------------------+---------+

And also configured this:

cat /etc/pam.d/mysqld 
auth required pam_sss.so
account required pam_sss.so

I have a group on the AD server named "dba", and I added an AD user 'john.d' to this group. I would like to log in to MySQL with AD users, e.g. john.d, who should also inherit all privileges granted to the "dba" group. Below is how this AD group, "dba", is set up to allow its users to access the Percona server:

CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld,dba=dbarole';
CREATE USER 'dbarole'@'%' IDENTIFIED BY 'dbapass';
GRANT ALL PRIVILEGES ON *.* TO 'dbarole'@'%';
GRANT PROXY ON 'dbarole'@'%' TO ''@'';

When I log in to MySQL as dbarole, everything works fine, with all privileges granted. But when I log in as john.d, one of the AD users in the AD group "dba", that user does not inherit the (ALL) privileges granted to its group and only has the USAGE privilege, as shown below:

mysql> show grants;
+-----------------------------------+
| Grants for @                      |
+-----------------------------------+
| GRANT USAGE ON *.* TO ''@''       |
| GRANT PROXY ON 'dba'@'%' TO ''@'' |
+-----------------------------------+
2 rows in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
+--------------------+
1 row in set (0.01 sec)

My question is: how can an AD user inherit the privileges granted to its group in MySQL?
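
A minimal diagnostic sketch, run from john.d's session, to check whether the proxy mapping fires at all (names as in the statements above):

SELECT USER(), CURRENT_USER(), @@proxy_user;
-- if the mapping worked, CURRENT_USER() should report 'dbarole'@'%';
-- an anonymous CURRENT_USER() means the proxied grants can never apply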

mysql percona
  • 1 answer
  • 697 Views
The Georgia
Asked: 2018-10-12 02:04:11 +0800 CST

MySQL PXC node failing to receive state

  • 1

I have three nodes that I want to set up as a Percona XtraDB Cluster (PXC). I bootstrapped the first node and joined the second, but somehow I cannot join the third node. The whole configuration is the same, as I just copied and pasted it:

[mysqld]
# Galera
wsrep_cluster_address = gcomm://10.1.5.100,10.1.5.101,10.1.5.102
wsrep_cluster_name = db-test
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_provider_options = "gcache.size=256M"
wsrep_slave_threads = 16 # 2~3 times with CPU
wsrep_sst_auth = "sstuser:sstPwd#123"
wsrep_sst_method = xtrabackup-v2

I am running the nodes on CentOS 7.x. Below is the status of the two PXC nodes already up and running:

| wsrep_ist_receive_seqno_end      | 0                                       |
| wsrep_incoming_addresses         | 10.1.5.100:3306,10.1.5.101:3306 |
| wsrep_cluster_weight             | 2                                       |
| wsrep_desync_count               | 0                                       |
| wsrep_evs_delayed                |                                         |
| wsrep_evs_evict_list             |                                         |
| wsrep_evs_repl_latency           | 0/0/0/0/0                               |
| wsrep_evs_state                  | OPERATIONAL                             |
| wsrep_gcomm_uuid                 | 8d59ca0f-cd35-11e8-863c-d79869fa6d80    |
| wsrep_cluster_conf_id            | 4                                       |
| wsrep_cluster_size               | 2                                       |
| wsrep_cluster_state_uuid         | ac97f711-cad5-11e8-8f39-be9d0594cdb9    |
| wsrep_cluster_status             | Primary                                 |
| wsrep_connected                  | ON                                      |
| wsrep_local_bf_aborts            | 0                                       |
| wsrep_local_index                | 0                                       |
| wsrep_provider_name              | Galera                                  |
| wsrep_provider_vendor            | Codership Oy <[email protected]>       |
| wsrep_provider_version           | 3.31(rf216443)                          |
| wsrep_ready                      | ON                                      |
+----------------------------------+-----------------------------------------+
71 rows in set (0.01 sec)

Below is the error from the error log of the third node, which failed to join:

backup-v2|10.1.5.102:4444/xtrabackup_sst//1
2018-10-11T09:20:03.278884-00:00 2 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 2) (Increment: 1 -> 3)
2018-10-11T09:20:03.278997-00:00 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:03.279155-00:00 2 [Note] WSREP: Assign initial position for certification: 69, protocol version: 4
2018-10-11T09:20:03.279626-00:00 0 [Note] WSREP: Service thread queue flushed.
2018-10-11T09:20:03.280052-00:00 2 [Note] WSREP: Check if state gap can be serviced using IST
2018-10-11T09:20:03.280145-00:00 2 [Note] WSREP: Local state seqno is undefined (-1)
2018-10-11T09:20:03.280445-00:00 2 [Note] WSREP: State gap can't be serviced using IST. Switching to SST
2018-10-11T09:20:03.280510-00:00 2 [Note] WSREP: Failed to prepare for incremental state transfer: Local state seqno is undefined: 1 (Operation not permitted)
         at galera/src/replicator_str.cpp:prepare_for_IST():549. IST will be unavailable.
2018-10-11T09:20:03.287673-00:00 0 [Note] WSREP: Member 1.0 (db-test-3.pd.local) requested state transfer from '*any*'. Selected 0.0 (db-test-2.pd.local)(SYNCED) as donor.
2018-10-11T09:20:03.287850-00:00 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 69)
2018-10-11T09:20:03.288073-00:00 2 [Note] WSREP: Requesting state transfer: success, donor: 0
2018-10-11T09:20:03.288225-00:00 2 [Note] WSREP: GCache history reset: ac97f711-cad5-11e8-8f39-be9d0594cdb9:0 -> ac97f711-cad5-11e8-8f39-be9d0594cdb9:69
2018-10-11T09:20:38.988120-00:00 0 [Warning] WSREP: 0.0 (db-test-2.pd.local): State transfer to 1.0 (db-test-3.pd.local) failed: -32 (Broken pipe)
2018-10-11T09:20:38.988274-00:00 0 [ERROR] WSREP: gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():766: Will never receive state. Need to abort.
2018-10-11T09:20:38.988366-00:00 0 [Note] WSREP: gcomm: terminating thread
2018-10-11T09:20:38.988493-00:00 0 [Note] WSREP: gcomm: joining thread
2018-10-11T09:20:38.988942-00:00 0 [Note] WSREP: gcomm: closing backend
2018-10-11T09:20:38.995070-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(NON_PRIM,8d59ca0f,3)
memb {
        d3167260,0
        }
joined {
        }
left {
        }
partitioned {
        8d59ca0f,0
        e3def063,0
        }
)
2018-10-11T09:20:38.995334-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view ((empty))
2018-10-11T09:20:38.996612-00:00 0 [Note] WSREP: gcomm: closed
2018-10-11T09:20:38.996837-00:00 0 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Terminated
        2018-10-11T09:20:47.767946+00:00 WSREP_SST: [ERROR] Removing /var/lib/mysql//xtrabackup_galera_info file due to signal
        2018-10-11T09:20:47.788109+00:00 WSREP_SST: [ERROR] Removing  file due to signal
        2018-10-11T09:20:47.808425+00:00 WSREP_SST: [ERROR] ******************* FATAL ERROR ********************** 
        2018-10-11T09:20:47.818240+00:00 WSREP_SST: [ERROR] Error while getting data from donor node:  exit codes: 143 143
        2018-10-11T09:20:47.828411+00:00 WSREP_SST: [ERROR] ****************************************************** 
        2018-10-11T09:20:47.840006+00:00 WSREP_SST: [ERROR] Cleanup after exit with status:32

And below is the error from the node that was chosen as donor:

2018/10/11 09:20:38 socat[22418] E connect(5, AF=2 10.1.5.102:4444, 16): No route to host
        2018-10-11T09:20:38.805798+00:00 WSREP_SST: [ERROR] ******************* FATAL ERROR ********************** 
        2018-10-11T09:20:38.818683+00:00 WSREP_SST: [ERROR] Error while sending data to joiner node:  exit codes: 0 1
        2018-10-11T09:20:38.832059+00:00 WSREP_SST: [ERROR] ****************************************************** 
        2018-10-11T09:20:38.846813+00:00 WSREP_SST: [ERROR] Cleanup after exit with status:32
2018-10-11T09:20:38.985060-00:00 0 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.1.5.102:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.7.23-23-57'  --binlog 'db-test-2-bin' --gtid 'ac97f711-cad5-11e8-8f39-be9d0594cdb9:69' : 32 (Broken pipe)
2018-10-11T09:20:38.985552-00:00 0 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.1.5.102:4444/xtrabackup_sst//1' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '5.7.23-23-57'  --binlog 'db-test-2-bin' --gtid 'ac97f711-cad5-11e8-8f39-be9d0594cdb9:69' 
2018-10-11T09:20:38.990613-00:00 0 [Warning] WSREP: 0.0 (db-test-2.pd.local): State transfer to 1.0 (db-test-3.pd.local) failed: -32 (Broken pipe)
2018-10-11T09:20:38.990815-00:00 0 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 69)
2018-10-11T09:20:38.997784-00:00 0 [Note] WSREP: declaring e3def063 at tcp://10.1.5.100:4567 stable
2018-10-11T09:20:38.997807-00:00 0 [Note] WSREP: Member 0.0 (db-test-2.pd.local) synced with group.
2018-10-11T09:20:38.998230-00:00 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 69)
2018-10-11T09:20:38.998277-00:00 0 [Note] WSREP: forgetting d3167260 (tcp://10.1.5.102:4567)
2018-10-11T09:20:38.998806-00:00 13 [Note] WSREP: Synchronized with group, ready for connections
2018-10-11T09:20:38.999112-00:00 13 [Note] WSREP: Setting wsrep_ready to true
2018-10-11T09:20:38.999198-00:00 13 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:39.003491-00:00 0 [Note] WSREP: Node 8d59ca0f state primary
2018-10-11T09:20:39.005025-00:00 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(PRIM,8d59ca0f,4)
memb {
        8d59ca0f,0
        e3def063,0
        }
joined {
        }
left {
        }
partitioned {
        d3167260,0
        }
)
2018-10-11T09:20:39.005270-00:00 0 [Note] WSREP: Save the discovered primary-component to disk
2018-10-11T09:20:39.009691-00:00 0 [Note] WSREP: forgetting d3167260 (tcp://10.1.5.102:4567)
2018-10-11T09:20:39.010097-00:00 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
2018-10-11T09:20:39.011037-00:00 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9
2018-10-11T09:20:39.019171-00:00 0 [Note] WSREP: STATE EXCHANGE: sent state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9
2018-10-11T09:20:39.021665-00:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9 from 0 (db-test-2.pd.local)
2018-10-11T09:20:39.021786-00:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb0b1f21-cd36-11e8-8ac8-c60fb82759c9 from 1 (db-test-1.pd.local)
2018-10-11T09:20:39.021861-00:00 0 [Note] WSREP: Quorum results:
        version    = 4,
        component  = PRIMARY,
        conf_id    = 3,
        members    = 2/2 (primary/total),
        act_id     = 69,
        last_appl. = 0,
        protocols  = 0/9/3 (gcs/repl/appl),
        group UUID = ac97f711-cad5-11e8-8f39-be9d0594cdb9
2018-10-11T09:20:39.021999-00:00 0 [Note] WSREP: Flow-control interval: [141, 141]
2018-10-11T09:20:39.022058-00:00 0 [Note] WSREP: Trying to continue unpaused monitor
2018-10-11T09:20:39.022774-00:00 17 [Note] WSREP: REPL Protocols: 9 (4, 2)
2018-10-11T09:20:39.023163-00:00 17 [Note] WSREP: New cluster view: global state: ac97f711-cad5-11e8-8f39-be9d0594cdb9:69, view# 4: Primary, number of nodes: 2, my index: 0, protocol version 3
2018-10-11T09:20:39.023209-00:00 17 [Note] WSREP: Setting wsrep_ready to true
2018-10-11T09:20:39.023256-00:00 17 [Note] WSREP: Auto Increment Offset/Increment re-align with cluster membership change (Offset: 1 -> 1) (Increment: 3 -> 2)
2018-10-11T09:20:39.023373-00:00 17 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-11T09:20:39.023540-00:00 17 [Note] WSREP: Assign initial position for certification: 69, protocol version: 4
2018-10-11T09:20:39.023832-00:00 0 [Note] WSREP: Service thread queue flushed.
2018-10-11T09:20:44.480289-00:00 0 [Note] WSREP:  cleaning up d3167260 (tcp://10.1.5.102:4567)

When I bootstrap the third node as its own cluster, it works fine. But when I then stop the first two nodes of the other cluster and try to have them join the new cluster, they cannot join. I can ping and telnet the first two cluster nodes from the third node and vice versa. I even tried stopping all nodes and bootstrapping the cluster from scratch, and that did not help.

What is really going on here?
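
The donor-side "No route to host" on 10.1.5.102:4444 usually means the connection was rejected rather than routed, which on CentOS 7 points at firewalld even when ping and port 3306 work. A minimal sketch of opening the full PXC port set on every node (illustrative, not a confirmed fix):

for port in 3306 4444 4567 4568; do    # mysql, SST, group communication, IST
    firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload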

mysql percona
  • 1 answer
  • 910 Views
Dino Daniel
Asked: 2018-07-13 08:00:28 +0800 CST

Point-in-time recovery in MySQL

  • 0

Can anyone advise on the best methods for doing a MySQL PITR from binary logs that are saved in MIXED format?

I am finding it difficult to identify the bad queries that were executed, so that I can skip them during the restore process using the mysqlbinlog tool.

I am following: https://www.percona.com/doc/percona-xtrabackup/LATEST/innobackupex/pit_recovery_ibk.html
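
A minimal sketch of the usual two-step mysqlbinlog workflow (file names and positions below are placeholders): first decode the log so the offending statement can be located, then replay up to just before it and again from just after it.

# 1. decode row events so the bad query can be found by eye
mysqlbinlog --base64-output=decode-rows -vv mysql-bin.000042 | less
# 2. replay around the bad statement once its positions are known
mysqlbinlog --stop-position=987654 mysql-bin.000042 | mysql -u root -p
mysqlbinlog --start-position=987921 mysql-bin.000042 | mysql -u root -p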

mysql percona
  • 1 answer
  • 89 Views
Jdeboer
Asked: 2018-05-10 00:37:56 +0800 CST

ProxySQL user not connecting - XtraDB

  • 1

I am trying to set up an XtraDB cluster consisting of 3 nodes. I now have the cluster running and, following the instructions, I am trying to set up ProxySQL for load balancing.

So I installed ProxySQL on all 3 nodes, and now I am trying to configure it with the admin tool. But every time I run the command:

[root@node1 log]# proxysql-admin --config-file=/etc/proxysql-admin.cnf --enable

It comes back with:

This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)

ProxySQL read/write configuration mode is singlewrite
ERROR 1045 (28000): ProxySQL Error: Access denied for user 'proxysql_admin'@'' (using password: YES)
Please check the ProxySQL connection parameters! Terminating.

Now, I would say the error is fairly self-explanatory. However, I have the proxysql_admin user properly defined both in the MySQL cluster and in the .cnf files.

Here is my proxysql-admin.cnf file:

# proxysql admin interface credentials.
export PROXYSQL_DATADIR='/var/lib/proxysql'
export PROXYSQL_USERNAME='proxysql_admin'
export PROXYSQL_PASSWORD='placeholder_pass'
export PROXYSQL_HOSTNAME='localhost'
export PROXYSQL_PORT='6032'

# PXC admin credentials for connecting to pxc-cluster-node.
export CLUSTER_USERNAME='proxysql_admin'
export CLUSTER_PASSWORD='placeholder_pass'
#export CLUSTER_HOSTNAME='localhost'
export CLUSTER_HOSTNAME='ccloud'
export CLUSTER_PORT='3306'

# proxysql monitoring user. proxysql admin script will create this user in pxc to monitor pxc-nodes.
export MONITOR_USERNAME='monitor'
export MONITOR_PASSWORD='placeholder_pass'

# Application user to connect to pxc-node through proxysql
export CLUSTER_APP_USERNAME='proxysql_user'
export CLUSTER_APP_PASSWORD='placeholder_pass'

# ProxySQL read/write hostgroup 
export WRITE_HOSTGROUP_ID='10'
export READ_HOSTGROUP_ID='11'

# ProxySQL read/write configuration mode.
export MODE="singlewrite"

# ProxySQL Cluster Node Priority File
export HOST_PRIORITY_FILE=$PROXYSQL_DATADIR/host_priority.conf

Here is the user in the MySQL cluster:

mysql> select User,Host from mysql.user;
+----------------+-----------+
| User           | Host      |
+----------------+-----------+
| proxysql_admin |           |
| proxysql_admin | %         |
| mysql.session  | localhost |
| mysql.sys      | localhost |
| proxysql_admin | localhost |
| proxysql_user  | localhost |
| root           | localhost |
| sstuser        | localhost |
+----------------+-----------+

I have tried changing the interfaces a few times, but that does not seem to be the problem. Does anyone have any idea why my user is not connecting?
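
One detail worth checking, as a sketch rather than a confirmed diagnosis: proxysql-admin first authenticates against ProxySQL's own admin interface on port 6032, and those credentials live in ProxySQL's admin-admin_credentials variable, not in mysql.user. If that variable still holds the default admin:admin, the PROXYSQL_USERNAME/PROXYSQL_PASSWORD pair above will be refused. From the admin interface (mysql -u admin -padmin -h 127.0.0.1 -P 6032):

UPDATE global_variables
   SET variable_value = 'proxysql_admin:placeholder_pass'
 WHERE variable_name = 'admin-admin_credentials';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;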

Just in case, here is the cluster's my.cnf on the node where I am trying this:

#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/

[mysqld]
server-id=1
datadir=/mysql-data
socket=/mysql-data/mysql.sock
pid-file=/var/run/mysqld/mysqld.pid

wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

wsrep_cluster_name=ccloud
wsrep_cluster_address=gcomm://ip1,ip2,ip3

wsrep_node_name=pxc1
wsrep_node_address=ip1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:placeholder_pass

pxc_strict_mode=ENFORCING

binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
percona xtradb-cluster
  • 2 answers
  • 2207 Views
Mohd Abdul Mujib
Asked: 2018-04-21 10:36:33 +0800 CST

MySQL XtraDB/Galera high-availability cluster on a RAMDisk *SuperFast*

  • 1

OK, so this is going to be a speculative question, mostly design-oriented and fairly long. I'd grab a cuppa' covfefe, if I were you.

Preface: So, I have been researching databases and wanted a really fast database (engine) (like really, really fast) with the following must-haves,

  1. ACID compliance
  2. In-memory-ish, for extremely fast IO
  3. Persistent (well... duh)
  4. Scalable, as in cluster/master-slave/etc.
  5. High availability (HA)
  6. Drop-in MySQL replacement
  7. Open source
  8. Must run on commodity servers (IYKWIM)

So, judging by my optimistic list of requirements, you would already jump to ....ummm

How to speed up MySQL, slow queries

All right, all right, jokes aside, I know that if innodb_buffer_pool_size is tuned correctly, the working set will be served out of memory most of the time, but I say

It's not in-memory, yo!

But you'd say Hey, you 2k18 folks must have come up with, like, 100% in-memory DBs by now, right? Umm... actually they have, but each one comes with its own catches.

  1. VoltDB Community Edition: everything looks fine until you realize it is not a drop-in replacement. It needs stored-procedure-ish commands in Java that require you to rewrite your whole application, or at least your PHP app's db layer/driver/etc. So? DEALBREAKER!!!

  2. MemSQL: well, this looks like a strong contender for our "Best OpenSource In-Mem Scalable SQL Acid DB of all time" contest. Except the memSQL Boss is like...

MemSQL server requirements

Needless to say, memSQL needs at least 4 cores and 8 GB of RAM as a bare minimum, and the recommended spec is pretty insane at 4 cores and 32 GB per core!!!! On top of that, the memSQL community edition (which, by the way, is not fully open source!, it's merely free) does not support high availability, since that is a paid feature. It is also NoSQL. So? DEALBREAKER!!!

  3. All the other NoSQL-ish DBs, such as membase, Redis, Memcached, etc., are pretty much ruled out.

So now, my genius idea!!!

I was wondering: could we run an XtraDB/Galera cluster with every instance running on a RAMDisk, with regular snapshots?

It ticks every checkbox.

Just hear me out. First, addressing the elephant in the room: we know that running MySQL DBs entirely off RAMDisks is, well, umm... bold, to put it as politely as possible. So what happens if a server crashes/shuts down/etc.? We lose one node, while our whole DB cluster as a whole is still alive and kicking a**. All we have to implement is bootstrapping that database from the last snapshot and syncing it back with the cluster, which is something clusters are inherently very good at!

OK, sneaky peeps, don't be salty with me; if you see a flaw in my implementation, point me to it.
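
For concreteness, the mechanical part of the idea is just a tmpfs datadir plus a periodic copy back to persistent storage. A minimal sketch with placeholder sizes and paths (a real setup would need a crash-consistent snapshot, not a plain rsync of a running datadir):

mount -t tmpfs -o size=16G tmpfs /mnt/mysql-ram
rsync -a /var/lib/mysql/ /mnt/mysql-ram/             # seed from the last on-disk state
# point datadir at /mnt/mysql-ram in my.cnf, start the node, then periodically:
rsync -a --delete /mnt/mysql-ram/ /srv/mysql-snapshot/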

mysql percona
  • 2 answers
  • 610 Views
